
Judging Qualitative Research

Judging Qualitative Research. The Role of the Reader. "There are no operationally defined truth tests to apply to qualitative research" (Eisner, 1991, p. 53).


Presentation Transcript


  1. Judging Qualitative Research

  2. The Role of the Reader • "There are no operationally defined truth tests to apply to qualitative research" (Eisner, 1991, p. 53). • Researcher and readers "share a joint responsibility" for establishing the value of the qualitative research product (Glaser and Strauss, 1967, p. 232). • "Pragmatic validation [of qualitative research] means that the perspective presented is judged by its relevance to and use by those to whom it is presented: their perspective and actions joined to the [researcher’s] perspective and actions" (Patton, 1990, p. 485).

  3. Validity of research corresponds to the degree to which it is accepted as sound, legitimate and authoritative by people with an interest in the research findings. • How do we judge which perspective to use to evaluate the validity of a qualitative study? • E.g., grounded theory calls for theoretical sampling of a wide range of people, whereas discourse analysis can rest on in-depth analysis of a few excerpts

  4. Process of agreeing criteria for judging qualitative research is useful because it involves critically reflecting on essential ingredients & practices • But simply following guidelines does not guarantee good research: qualitative research is not simply a descriptive science but relies on the capacity to evoke imaginative experience & reveal new meanings.

  5. 3 features • Coherence (structural corroboration or triangulation) • Consensus • Instrumental Utility. "Guides call our attention to aspects of the situation or place we might otherwise miss" (Eisner, 1991, p. 59).

  6. Trustworthiness • "How can an inquirer persuade his or her audiences that the research findings of an inquiry are worth paying attention to?" (Lincoln and Guba, 1985, p. 290). • Lincoln and Guba (1985, p. 300) propose an alternative set of criteria that correspond to those typically employed to judge quantitative work.

  7. Comparison of criteria • Conventional term → Naturalistic term • Internal validity → Credibility • External validity → Transferability • Reliability → Dependability • Objectivity → Confirmability

  8. Critical of use of "comparable criteria" • little different from the conventional criteria • assumes that what is known (existent or interpreted reality) stands independent of the inquirer and can be described without distortion by the inquirer • naturalistic research can offer only an interpretation of the interpretations of others • to assume an independent reality is unacceptable for many qualitative researchers

  9. e.g., Smith & Heshusius (1986) • there is no "out there" out there: the only reality is a completely mind-dependent reality, which will vary from individual to individual; • therefore, it is not possible to choose a best interpretation from among the many available, because no technique or interpretation can be "epistemologically privileged" (p. 9).

  10. This stance prohibits the possibility of reconciling alternative interpretations • Important to determine which criteria are consistent with the naturalistic paradigm, yet which allow for a declaration that good research has been carried out. • Select appropriate criteria for judging overall trustworthiness of a qualitative study

  11. Internal Validity vs. Credibility

  12. Credibility: a ‘toolbox’ of procedures for enhancing validity • Triangulation: enrich understanding of a phenomenon by viewing it from different perspectives. 4 types: • methods triangulation; • data triangulation; • triangulation through multiple analysts; • theory triangulation. • For example: gather data from different groups of people; gather data at different times from the same people; combine different theories/methods in a ‘composite analysis’; triangulate the perspectives of different researchers.

  13. Comparing researchers’ coding • Inter-rater comparison of subtle, complex coding schemes; participant feedback/respondent validation • Making segments of the raw data available for others to analyze, and the use of "member checks," in which respondents are asked to corroborate findings.

  14. Disconfirming case analysis • Deviant or negative cases: systematically searching for data that do not fit themes & patterns • Audit (paper) trail: provide evidence linking raw data to the final report

  15. Audit trail, consisting of • raw data; • analysis notes; • reconstruction and synthesis products; • process notes; • personal notes; and • preliminary developmental information • Critical component: the conceptual chain of logic, which mimics the replicability process in conventional research.

  16. External Validity / Generalizability versus Transferability

  17. Reliability versus Dependability

  18. Objectivity versus Confirmability

  19. Empathic Neutrality • Patton (1990): the terms objectivity and subjectivity have become "ideological ammunition in the paradigms debate." Use instead "empathic neutrality" (p. 55). • Empathy "is a stance toward the people one encounters, while neutrality is a stance toward the findings" (p. 58). A researcher who is neutral tries to be non-judgmental, and strives to report what is found in a balanced way. • Lincoln and Guba (1985): the "confirmability" of the research - the degree to which the researcher can demonstrate the neutrality of the research interpretations, through a "confirmability audit."

  20. Demonstrating validity • Sensitivity to context: allow patterns & meanings NOT prespecified to emerge • Relevant theoretical & empirical literature • Socio-cultural setting • Participants’ perspectives • Ethical issues • Empirical data

  21. Commitment & rigour • Thorough data collection • Depth/breadth of analysis • Methodological competence/skill • In-depth engagement with topic

  22. Coherence & transparency • clarity & power of argument • fit between theory and method • transparent methods & data presentation • Reflexivity

  23. Impact and Importance • practical/applied • theoretical • socio-cultural

  24. It is not always possible to meet some criteria; sometimes you must prioritize some kinds of validity over others • A set of principles to refer to when making decisions about how to carry out & justify your research.

  25. Reading • Willig, Chapter 9. • Barbour, R.S. (2001) Checklists for improving rigour in qualitative research: a case of the tail wagging the dog? British Medical Journal, 322, 1115-1117. • Cho, J. & Trent, A. (2006) Validity in qualitative research revisited. Qualitative Research, 6, 319-340. • Harré, R. (2004) Staking our claim for qualitative psychology as science. Qualitative Research in Psychology, 1, 3-14. • Henwood, K. (2004) Reinventing validity. In Todd, Z., Nerlich, B., McKeown, S. & Clarke, D.D. (Eds.), Mixing Methods in Psychology: The Integration of Qualitative and Quantitative Methods in Theory and Practice. Psychology Press. Chapter 3. • Parker, I. (2005) Qualitative Psychology: Introducing Radical Research. OUP. Chapter 10.
