
Slides on case selection, case studies, and interviewing




Presentation Transcript


  1. Slides on case selection, case studies, and interviewing

  2. Knowing what to observe • Causal inference is the objective of social science • Involves learning about facts we do not observe based on what we do observe • Recall that we cannot observe causation directly • This defines what we select for observation

  3. Correlation and causation • We observe association… • …but make inferences about causation • Making the case for causation: • Temporal precedence: the change in the IV occurs prior to the change in the DV • Alternative explanations for the association have been rebutted, e.g.: • Endogeneity: the supposed DV also causes the IV • Spuriousness: a third variable causes changes in both the supposed IV and DV • Another variable is the real IV – and after controlling for it, the association between the supposed IV and DV disappears

  4. Random selection • Each case from the population has a known probability of being selected • The selection is probabilistic – rather than intentional • Not haphazard or arbitrary • The real advantage is that this selection criterion is independent of the cases’ values on the DV and IV • Unbiased results • Different types of random selection methods; more on that later
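The idea of a known, value-independent selection probability can be sketched in a few lines of Python. The sampling frame of 1,000 cases and the sample size of 100 are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame of 1,000 cases
population = list(range(1000))

# Simple random sample: every case has the same known probability
# of selection (100/1000 = 0.1), independent of its values on the
# IV or DV, because selection never looks at those values
sample = random.sample(population, k=100)

print(len(sample))       # 100
print(len(set(sample)))  # 100 (sampling without replacement)
```

Because the selection rule never consults the cases' values, the resulting estimates are unbiased (though still subject to sampling error).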

  5. Intentional selection • Involves selecting cases using knowledge of the cases’ values on the IVs (sometimes the DV too) • When random selection is impossible or inappropriate • No sampling frame • Small populations and/or samples • Important cases cannot be missed

  6. Eliminating alternative causal interpretations • Control for other causes (Independent Variables: IVs) • Experimental design • Random assignment of values of the IV to observations – only then do we observe values on DV • Use of control group • Use of pre-treatment measurements of DV

  7. Quasi-experiments more common in poli. sci. • The “natural” experiment (would be more accurate to call this non-experimental research) • Use of a control group • But the researcher has not assigned observations to the treatment or control group • Often no pre-treatment measure

  8. Statistical control • Examine the relationship between an IV and a DV while holding constant the value of other variables • E.g. Height and test scores controlling for age • Simpson’s paradox: the direction of the association changes when controlling for another variable

  9. An example of a spurious relationship • [Figure: scatter plot of math test score against height, with each point labelled by the pupil’s age (8, 10 or 12)]

  10. Regression to the mean • We observe random / non-systematic variation in the values of variables • E.g. yearly oscillations in crime statistics • Implication for the selection of cases for making causal inference • A high value, followed by a policy intervention, followed by a lower value, does not necessarily indicate an effect of the intervention • Solution: larger number of observations before and after
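Slide 10's warning can be checked by simulation: draw purely random yearly values, select the years that happen to be high, and the following year is on average back near the true mean even though nothing intervened. All figures (mean 100, SD 10) are invented:

```python
import random

random.seed(0)
TRUE_MEAN, SD, YEARS = 100.0, 10.0, 100_000

# Yearly "crime statistics" that are pure noise around a constant mean
series = [random.gauss(TRUE_MEAN, SD) for _ in range(YEARS)]

# Select "crisis" years: value more than one SD above the true mean
selected = [t for t in range(YEARS - 1) if series[t] > TRUE_MEAN + SD]

during = sum(series[t] for t in selected) / len(selected)
after = sum(series[t + 1] for t in selected) / len(selected)

print(round(during, 1))  # well above 110, by construction
print(round(after, 1))   # close to 100: regression to the mean
```

A policy introduced in a selected year would look effective, purely as an artifact of how the year was chosen, which is why more observations before and after the intervention are needed.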

  11. Avoid indeterminacy • Research designs that are indeterminate • Where, due to the cases selected, it is impossible to make valid inferences about causal effects • Can result from having more inferences than observations • …or from multicollinearity: where the IVs are correlated very strongly • Difference between quantitative and qualitative research

  12. Multicollinearity • One of the supposed IVs is a perfect function of the other • No variation in one of the IVs at a given value of the other IV • E.g. Democracies (IV1) and trading partners (IV2) do not go to war (DV) with each other – then only examining democracies that trade • Success in joint research collaboration (DV) is caused by non-rivalry: pairs of countries that are different in size (IV1) and that do not trade with the same third countries (IV2) – then only examining differently-sized countries that do not trade
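The "perfect function" case on slide 12 can be verified numerically. With invented data in which IV2 is an exact linear function of IV1 (as happens when only democracies that trade are examined), the cross-product matrix that least-squares regression must invert is singular, so no unique estimates of the separate effects exist:

```python
# Two IVs where IV2 is a perfect linear function of IV1, as when
# case selection makes the two always move together
iv1 = [1, 2, 3, 4, 5]
iv2 = [2 * x + 3 for x in iv1]  # perfectly collinear with iv1

def centered(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

a, b = centered(iv1), centered(iv2)

# Determinant of the 2x2 cross-product matrix [[a.a, a.b], [a.b, b.b]];
# zero means the OLS normal equations have no unique solution
saa = sum(x * x for x in a)
sbb = sum(x * x for x in b)
sab = sum(x * y for x, y in zip(a, b))
det = saa * sbb - sab * sab
print(det)  # 0.0: the separate effects of iv1 and iv2 cannot be estimated
```

At any given value of one IV there is no variation in the other, which is exactly the problem the slide describes.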

  13. Avoid selection bias • Where the cases are not representative of the population you are trying to make inferences about • The associations are distorted, and apply only to the group of specific cases selected • Worst and most obvious: selecting cases to support favorite hypothesis • But there are other, more subtle manifestations • Selecting on the availability of data • What the historical record preserves

  14. Avoid selecting on the DV • No variation in the DV • Akin to Mill’s method of agreement • Cannot be sure that the absence of the observed values on the IVs would be associated with different values on the DV • Limited variation in the DV • Underestimates the size of causal effects • Van Evera: is this “a methodological myth”?

  15. Some examples of selection bias • Porter (1990) The Competitive Advantage of Nations • Rational Deterrence Theory (see the article on the reading list by Achen and Snidal (1989)) • Causes of industrial unrest / strikes And a study that recognised and avoided selection bias • Tilly (1975) The Formation of National States in Western Europe

  16. Selecting on IV • Selecting on cases with only a restricted set of values on the IV • Does not bias causal inferences • But if there is no variation on the IV, no causal inferences can be made • E.g. the effect of single-party government on pledge fulfilment by studying only single-party governments • E.g. the effect of industrialisation on the prestige attributed to various occupations by studying only industrialised countries • In general, maximise variation on IV
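The "no variation on the IV" point can be seen directly from the least-squares slope formula, cov(x, y) / var(x). With invented pledge-fulfilment figures for a study that includes only single-party governments, the denominator is zero:

```python
# IV never varies: e.g. only single-party governments are studied
x = [1, 1, 1, 1]          # 1 = single-party government, for every case
y = [0.6, 0.7, 0.5, 0.8]  # hypothetical pledge-fulfilment rates

mx = sum(x) / len(x)
var_x = sum((xi - mx) ** 2 for xi in x)
print(var_x)  # 0.0: the slope cov(x, y) / var(x) is undefined
```

No bias is introduced, but no causal inference about the IV is possible either; hence the advice to maximise variation on the IV.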

  17. Making the most of the available information • Maximising the number of observations • Examining lower levels of aggregation than the entire “case” • E.g. specific decisions within the Cuban missile crisis • Avoid throwing away data by aggregating them

  18. Case studies

  19. Case study research • “Intensive study of a single unit for the purpose of understanding a larger class of (similar) units” (Gerring 2004) • Case: “a phenomenon for which we report and interpret only a single measure on any pertinent variable” (Eckstein 1975) • Qualitative, small-n, ethnographic, clinical, participant-observation or otherwise “in the field” (Yin 1994) • Characterised by process-tracing (see Van Evera 1997, Chap. 2)

  20. The ontological position (Gerring 2004) • [Figure: the utility of the case study design plotted against the assumed comparability of potential units, on a spectrum from nomothetic to idiographic, with the “case study ideal” indicated]

  21. n=1 case studies • Rare, although thought to be typical of case study research • When n=1 might be appropriate • A theory may generate a precise prediction • Leading to a crucial case study • A “least likely” observation (Eckstein 1975)

  22. Single observations cannot provide sufficient evidence • When there are alternative explanations • Indeterminate research design: more inferences than observations • When there is measurement error • In either dependent or independent variables • When the causes are not deterministic • Either because of unknown conditions or fundamental probabilistic nature

  23. Where can more observations be found in a single “case”? • Ask: What are the possible observable implications of the theory or hypothesis being tested? • How many instances of these implications can be found in this case?

  24. Look within units • Comparability of observations • Spatial variation • E.g. subnational regions and communities • Sectoral variation • E.g. within different policy areas / government agencies • Temporal variation • NB: may not be “independent” but still provide additional information

  25. Measure new implications of theory • Same observations, but different measurements • E.g. a hypothesis that predicts social unrest may also have implications for voting behaviour, business investment and/or emigration • E.g. Putnam’s (1993) many measures of government performance

  26. Considerations that drive the need for more observations • Variability in DV • Need for certainty about existence and magnitude of cause • Multicollinearity: large overlap between different IVs • Little variation on IV

  27. What are case studies good for? • Different schools of thought • “Historical wisdom about the limits of current theory, and empirical generalization to be explained by future theory”…but not… “theory construction and theory verification” (Achen and Snidal 1989) • Theory testing, causal inference (see Van Evera 1997)

  28. Achen & Snidal on Rational Deterrence Theory • The theory: a very general set of propositions • The rational actor assumption • Variation in outcomes explained by differences in actors’ opportunities • States act as if they are unitary and rational • Predictions: use of threats to make other actors behave in desirable ways • Real-world implications • “rationality of irrationality” • Dangers of disarmament • Balance of power

  29. Case study evidence is said to refute deterrence theory • Many examples where deterrence failed to avert conflict • Many other variables that inform policymakers’ choices (e.g. domestic factors, ideology) • Decision-makers do not carry out rational calculations

  30. Selection bias • Researchers have studied crises • Cases where deterrence failed • Cannot study crises that have been averted • Could, however, study pairs of countries with serious conflicts, whether or not these result in use of force • Problem of no variance on the DV

  31. Lists of other important variables • A theory’s explanatory power does not mean that it has to match the historical record of a particular case • Lists of other important variables are not “theory” in the sense of a set of general assumptions about how people act from which hypotheses are derived

  32. Decision-makers’ calculations • Process tracing (see Van Evera 1997) • Identify the mechanisms through which a particular outcome was reached (e.g. how bipolarity leads to peace) • Look at the individual decisions leading up to the final outcome to be explained • Often involves identification of decision-makers’ perceptions and “reasons” for action

  33. The descriptivist fallacy • Rational deterrence theory does not refer to decision-makers’ perceptions or beliefs • In general, rational choice theory does not refer to mental calculations • Holds only that they act “as if” they solved certain problems, whether or not they actually solved them

  34. Survey research

  35. Survey research • When standardised questionnaires are appropriate: • Research questions about large populations of individuals and/or organisations • Well defined concepts • Confidence about relevant variables

  36. Sampling error in survey research • Even when using random samples from populations, and with high response rates, there will still be error • Error due to having a sample rather than all of the cases we are interested in • Causes uncertainty, but not bias • Reduced by increasing number of observations
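Slide 36's claim that sampling error creates uncertainty but not bias, and shrinks as observations increase, can be checked by simulation. The population (100,000 people, 60% supporting some policy) and the sample sizes are invented:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 1 = supports the policy, 0 = does not (60%)
population = [1] * 60_000 + [0] * 40_000

def sample_means(n, reps=500):
    """Draw `reps` random samples of size n; return their means."""
    return [statistics.mean(random.sample(population, n)) for _ in range(reps)]

small, large = sample_means(100), sample_means(1000)

# Both sample sizes give unbiased estimates of 0.6;
# only the spread (the sampling error) differs
print(round(statistics.mean(small), 2), round(statistics.stdev(small), 3))
print(round(statistics.mean(large), 2), round(statistics.stdev(large), 3))
```

The means of both sets of estimates sit near the true 0.6, but the n=1000 estimates scatter far less: error falls with the square root of the number of observations.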

  37. Non-response • When non-respondents differ from respondents on one or more variables of interest • Attempts to reduce the problem by: • Call backs • Weighting cases/individuals from underrepresented parts of the population more heavily
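The weighting remedy on slide 37 can be illustrated with invented response counts: suppose young people support the policy more (70% vs 30%) but respond at half the rate of old people, so the unweighted sample mean is biased downward. Weighting each respondent by population share over sample share corrects it:

```python
# Hypothetical respondents: (group, supports-policy) pairs.
# True population is 50% young, 50% old, so true support is
# 0.5 * 0.7 + 0.5 * 0.3 = 0.5 — but young people under-respond.
respondents = (
    [("young", 1)] * 70 + [("young", 0)] * 30    # 100 young respondents
    + [("old", 1)] * 60 + [("old", 0)] * 140     # 200 old respondents
)

unweighted = sum(s for _, s in respondents) / len(respondents)

# Weight = population share / sample share for each group
weights = {"young": 0.5 * 300 / 100, "old": 0.5 * 300 / 200}
weighted = (sum(weights[g] * s for g, s in respondents)
            / sum(weights[g] for g, _ in respondents))

print(round(unweighted, 3))  # 0.433: old respondents drag the estimate down
print(round(weighted, 3))    # 0.5: weighting restores the population balance
```

Note that weighting only helps when non-respondents within a group resemble the respondents in that group; it cannot fix differences the weighting variables do not capture.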

  38. Validity issues in measurement • Question wording • Keep it simple (see Pettigrew on the Holocaust question) • Avoid double-barrelled questions • Avoid value-laden terms • Use closed-ended questions where possible • Question ordering • Try to put open-ended questions first

  39. Levels of analysis • Political science survey research often involves moving between micro and macro levels • So beware of: • Compositional fallacy • Ecological fallacy

  40. Ronald Inglehart. 2003. “How Solid is Mass Support for Democracy – And How Can We Measure It?” PS Online, January 2003. www.apsanet.org

  41. Semi-structured interviews • Face-to-face, open-ended, qualitative • E.g. questions about lobbying activities • When the researcher has little information about the variables of interest • For particular types of interviewee (elites, highly educated)
