
Evaluating Research Reports


Presentation Transcript


  1. Evaluating Research Reports Dr. Aidh Abu Elsoud Alkaissi An-Najah National University Faculty of Nursing

  2. THE RESEARCH CRITIQUE • Nursing practice can be based on solid evidence only if research reports are critically appraised (to estimate their quality). • Consumers sometimes think that if a report was accepted for publication, the study must be sound. • Unfortunately, this is not the case. Indeed, most research has limitations and weaknesses.

  3. Although disciplined research is the best possible means of answering many questions, no single study can provide conclusive evidence. • Rather, evidence is accumulated through the conduct and the evaluation of several studies addressing the same or a similar research question.

  4. Consumers who can do reflective and thorough critiques of research reports also play a role in advancing nursing knowledge.

  5. Guidelines for the Conduct of a Written Research Critique • 1. Be sure to comment on the study’s strengths as well as weaknesses. • The critique should be a balanced analysis of the study’s worth. • All reports have some positive features—be sure to find and note them. • 2. Give specific examples of the study’s strengths and limitations. Avoid vague generalizations of praise and fault finding.

  6. 3. Justify your criticisms. Offer a rationale for your concerns. • 4. Be objective. Avoid being overly critical of a study because you are not interested in the topic or because your world view is inconsistent with the underlying paradigm (model).

  7. 5. Be sensitive in handling negative comments. • Put yourself in the shoes of the researcher receiving the comments. • Do not be condescending (Displaying a patronizingly superior attitude) or sarcastic (exhibiting lack of respect). • 6. Don’t just identify problems—suggest alternatives, indicating how a different approach would have solved a methodologic problem. • Make sure the recommendations are practical.

  8. Guidelines for Critiquing Research Problems, Research Questions, and Hypotheses • 1. Has the research problem been clearly identified? • Has the researcher appropriately delimited its scope? • 2. Does the problem have significance for nursing? • How might the research contribute to nursing practice, administration, education, or policy? • 3. Is there a good fit between the research problem and the paradigm within which the research was conducted?

  9. 4. Does the report formally present a statement of purpose, research questions, or hypotheses? • Is this information communicated clearly and concisely, and is it placed in a logical and useful location? • 5. Are purpose statements or questions worded appropriately (e.g., are key concepts/variables identified and the population of interest specified)? • 6. If there are no formal hypotheses, is their absence justifiable? Are statistical tests used despite the absence of stated hypotheses?

  10. 7. Do hypotheses (if any) flow from a theory or previous research? • Is there a justifiable basis for the predictions? • 8. Are hypotheses (if any) properly worded—do they state a predicted relationship between two or more variables? • Are they directional or nondirectional, and is there a rationale for how they were stated? • Are they presented as research or as null hypotheses?

  11. Guidelines for Critiquing Research Literature Reviews • 1. Does the review seem thorough—does it include all or most of the major studies conducted on the topic? • Does it include recent work? • 2. Does the review rely mainly on primary sources (the original studies)? • 3. Is the review merely a summary of existing work, or does it critically appraise and compare key studies? • Does the review identify important gaps in the literature?

  12. 4. Does the review use appropriate language, suggesting the tentativeness of prior findings? • Is the review objective? • 5. Is the review well organized? • Is the development of ideas clear? • 6. Does the review lay the foundation for undertaking the new study?

  13. Guidelines for Critiquing Theoretical and Conceptual Frameworks • 1. Does the research report describe a theoretical or conceptual framework for the study? If not, does the absence of a theoretical framework detract (take away) from the usefulness or significance of the research? • 2. Does the report adequately describe the major features of the theory so that readers can understand the conceptual basis of the study?

  14. 3. Is the theory appropriate to the research problem? Would a different theoretical framework have been more appropriate?

  15. 4. Is the theoretical framework based on a conceptual model of nursing, or is it borrowed from another discipline? • Is there adequate justification for the researcher’s decision about the type of framework used? • 5. Do the research problem and hypotheses flow naturally from the theoretical framework, or does the link between the problem and theory seem contrived (forced or artificial)?

  16. Guidelines for Critiquing Research Designs in Quantitative Studies • 1. What would be the most rigorous research design for the research question? How does this correspond to the design actually used? • 2. If there is an intervention, was a true experimental, quasi-experimental, or preexperimental design used, and how does this affect the believability of the findings?

  17. 3. If there is an intervention, was it described in sufficient detail? • Was the intervention reliably implemented? • Is there evidence of treatment “dilution” or contamination of treatments? Were participation levels in the treatment high?

  18. 4. If the design is nonexperimental, was the study inherently (by its very nature) nonexperimental? If not, is there an adequate justification for failure to manipulate the independent variable? • 5. What types of comparisons are specified in the design (e.g., before–after, between groups)? Do these comparisons adequately illuminate the relationship between independent and dependent variables? • If there are no comparisons, does this pose difficulties for interpreting results?

  19. 6. Was the design longitudinal (data collected at multiple points over time) or cross-sectional (data collected at a single point in time), and was this appropriate? • Was the number of data collection points reasonable? • 7. What procedures were used to control external (situational) factors, and were they adequate and appropriate?

  20. 8. What procedures were used to control extraneous subject characteristics, and were they adequate and appropriate? • 9. To what extent is the study internally valid (the extent to which the findings accurately represent the causal relationship between an intervention and an outcome in the particular study)? What alternative explanations must be considered (i.e., what are the threats to the study’s internal validity)? • 10. To what extent is the study externally valid (the extent to which a finding applies, or can be generalized, to persons, objects, settings, or times other than those that were the subject of study)? What are the threats to the study’s external validity?

  21. Guidelines for Critiquing Qualitative and Mixed-Method Designs • 1. Is the research tradition for the qualitative study identified? If none was identified, can one be inferred? If more than one was identified, is this justifiable or does it suggest “method slurring”? • 2. Is the research question congruent with the research tradition (i.e., is the domain of inquiry for the study congruent with the domain encompassed by the tradition)? Are the data sources, research methods, and analytic approach congruent with the research tradition? • 3. How well is the design described? Is the design appropriate, given the research question? What design elements might have strengthened the study (e.g., a longitudinal perspective rather than a cross-sectional one)?

  22. 4. Is the study exclusively qualitative, or was the design mixed method, involving both qualitative and quantitative data? Could the design have been strengthened by the inclusion of a quantitative component? • 5. If the study used a mixed-method design, how did the inclusion of both approaches contribute to enhanced theoretical insights, enhanced validity, or movement toward new frontiers?

  23. Guidelines for Critiquing Quantitative Sampling Designs • 1. Is the target or accessible population identified and described? Are the eligibility criteria clearly specified? • Would a more limited population specification have controlled for important sources of extraneous variation not covered by the research design? • 2. What type of sampling plan was used? Does the report make clear whether probability or nonprobability sampling was used?

  24. 3. How were subjects recruited into the sample? Does the method suggest potential biases? • 4. How adequate is the sampling plan in terms of yielding a representative sample? • 5. If the sampling plan is weak (e.g., a convenience sample), are potential biases identified? Is the sampling plan justified, given the research problem?

  25. 6. Did some factor other than the sampling plan itself (e.g., a low response rate) affect the representativeness of the sample? Did the researcher take steps to produce a high response rate? • 7. Are the size and key characteristics of the sample described? • 8. Is the sample sufficiently large? Was the sample size justified on the basis of a power analysis? • 9. To whom can the study results reasonably be generalized?
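
The sample-size question in item 8 can be made concrete with a power analysis. The sketch below is a hypothetical example (not from the slides) using Python's statsmodels: with an assumed medium effect size, a 0.05 significance level, and 80% power, it estimates how many subjects each of two groups would need.

```python
# Hypothetical power analysis sketch (effect size, alpha, and power are illustrative assumptions).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium standardized effect (Cohen's d)
    alpha=0.05,        # conventional significance level
    power=0.80,        # desired probability of detecting the effect
    ratio=1.0,         # equal group sizes
)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 per group
```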

  26. Guidelines for Critiquing Qualitative Sampling Designs • 1. Is the setting or study group adequately described? Is the setting appropriate for the research question? • 2. Are the sample selection procedures described? What type of sampling strategy was used? • 3. Given the information needs of the study, was the sampling approach appropriate? Were dimensions of the phenomenon under study adequately represented?

  27. 4. Is the sample size adequate? Did the researcher stipulate that information redundancy was achieved? Do the findings suggest a richly textured and comprehensive set of data without any apparent “holes” or thin areas?

  28. Guidelines for Data Collection Procedures • 1. How were data collected? Were multiple methods used and judiciously combined? • 2. Who collected the data? Were data collectors judiciously chosen? Do they have traits (e.g., their professional role, their relationship with study participants) that could have undermined the collection of unbiased, high-quality data?

  29. 3. Was the training of data collectors adequate? Were steps taken to improve their ability to elicit or produce high-quality data or to monitor their performance? • 4. Where and under what circumstances were data gathered? Was the setting for data collection appropriate?

  30. 5. Were other people present during data collection? Could the presence of others have resulted in any biases? • 6. Did the collection of data place any burdens (in terms of time, stress, privacy issues) on participants? How might this have affected data quality?

  31. Guidelines for Critiquing Self-Reports • INTERVIEWS AND QUESTIONNAIRES • 1. Does the research question lend itself to self-report data? Would an alternative method have been more appropriate? Should another method have been used as a supplement? • 2. How structured was the approach? Is the degree of structure consistent with the nature of the research question? • 3. Do the questions asked adequately cover the complexities of the phenomenon under investigation?

  32. 4. Did the researcher use the best possible mode for collecting self-report data (i.e., personal interviews, telephone interviews, self-administered questionnaires), given the research question and respondent characteristics? Would an alternative method have improved data quality?

  33. 5. [If an instrument is available for review]: Was the instrument too long or too brief? Was there an appropriate blend of open-ended and closed-ended questions? Are questions clearly and sensitively worded? Is the ordering of questions appropriate? Are response alternatives comprehensive? Could questions lead to biased responses? • 6. Were the instrument and data collection procedures adequately pretested?

  34. SCALES • 7. If a scale was used, is its use justified? Does it adequately capture the construct of interest? • 8. If a new scale was developed for the study, is there adequate justification for not using an existing one? Was the new scale adequately tested and refined? • 9. Does the report provide a rationale for using the selected scale (e.g., one particular scale to measure stress, as opposed to other available scales)? • 10. Are procedures for eliminating or minimizing response-set biases described, and were they appropriate?
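
Item 10 concerns response-set biases. One common safeguard is to include negatively worded items and reverse-score them before computing a total. The following is a minimal sketch under assumed conditions (a hypothetical 1-5 Likert scale with invented items item1 to item4); it is an illustration, not the procedure of any particular study.

```python
# Hypothetical reverse-scoring sketch for a 1-5 Likert scale (data invented).
import pandas as pd

responses = pd.DataFrame({
    "item1": [4, 5, 3],   # positively worded items
    "item2": [2, 1, 4],
    "item3": [1, 2, 5],   # negatively worded items (to be reversed)
    "item4": [2, 1, 3],
})

negatively_worded = ["item3", "item4"]
# On a 1-5 scale, a score x is reversed as (max + min) - x = 6 - x.
responses[negatively_worded] = 6 - responses[negatively_worded]

responses["total_score"] = responses.sum(axis=1)
print(responses)
```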

  35. Guidelines for Critiquing Observational Methods • 1. Does the research question lend itself to an observational approach? Would an alternative data collection method have been more appropriate? Should another method have been used as a supplement? • 2. Is the degree of structure of the observational method consistent with the research question? • 3. To what degree were observers concealed during data collection? What effect might their known presence have had on the behaviors and events under observation?

  36. 4. What was the unit of analysis of the observations? How much inference was required on the part of the observers, and to what extent might this have led to bias? • 5. Where did observations take place? To what extent did the setting influence the “naturalness” of behaviors being observed? • 6. How were data recorded (e.g., on field notes or checklists)? Did the recording procedures seem appropriate? • 7. What steps were taken to minimize observer bias? How were observers trained, and how was their performance evaluated?

  37. 8. If a category scheme was developed, did it appear appropriate? Do the categories adequately cover the relevant behaviors? Was the scheme overly demanding of observers, leading to potential error? If the scheme was not exhaustive, did the omission of large realms of subject behavior result in an inadequate context for understanding the behaviors of interest? • 9. How were events or behaviors sampled? Did this plan appear to yield an adequate or representative sample of relevant behaviors?

  38. Guidelines for Critiquing Biophysiologic Measures • 1. Does the research question lend itself to the collection of biophysiologic data? Would an alternative data collection method have been more appropriate? Should another method have been used as a supplement? • 2. Was the proper instrumentation used to obtain the biophysiologic measurements, or would an alternative have been more suitable?

  39. 3. Was care taken to obtain accurate data? For example, did the researcher’s activities permit accurate recording? • 4. Did the researcher have the skills necessary for proper use and interpretation of the biophysiologic measures?

  40. Guidelines for Evaluating Data Quality in Quantitative Studies • 1. Is there a congruence between the research variables as conceptualized (i.e., as discussed in the introduction) and as operationalized (i.e., as described in the methods section)? • 2. If operational definitions (or scoring procedures) are specified, do they clearly indicate the rules of measurement? Do the rules seem sensible? Were data collected in such a way that measurement errors were minimized?

  41. 3. Does the report offer evidence of the reliability of measures? Does the evidence come from the research sample itself, or is it based on other studies? If the latter, is it reasonable to conclude that data quality for the research sample and the reliability sample would be similar (e.g., are sample characteristics similar)? • 4. If reliability is reported, which estimation method was used? Was this method appropriate? Should an alternative or additional method of reliability appraisal have been used? Is the reliability sufficiently high?
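
For item 4, a frequently reported estimation method for internal consistency is Cronbach's alpha. The sketch below computes alpha directly from its definition on a small, made-up item matrix; the data and the common 0.70 benchmark are illustrative assumptions only.

```python
# Minimal Cronbach's alpha sketch on made-up data (rows = respondents, columns = items).
import numpy as np

items = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values around 0.70 or higher are often considered acceptable
```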

  42. 5. Does the report offer evidence of the validity of the measures? Does the evidence come from the research sample itself, or is it based on other studies? If the latter, is it reasonable to believe that data quality for the research sample and the validity sample would be similar (e.g., are the sample characteristics similar)? • 6. If validity information is reported, which validity approach was used? Was this method appropriate? Does the validity of the instrument appear to be adequate?
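
For item 6, one widely used validity approach is convergent (criterion-related) validity: correlating scores on the study instrument with an established measure of the same construct. The sketch below is a hypothetical illustration with invented scores.

```python
# Hypothetical convergent-validity sketch: correlate a new scale with an established measure.
from scipy.stats import pearsonr

new_scale_scores = [12, 18, 15, 22, 9, 17, 20, 14]      # invented scores on the new instrument
established_scores = [14, 20, 13, 25, 10, 18, 22, 15]   # invented scores on a validated measure

r, p_value = pearsonr(new_scale_scores, established_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a strong positive r supports convergent validity
```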

  43. 7. If there is no reliability or validity information, what conclusion can you reach about the quality of the data in the study? • 8. Were the research hypotheses supported? If not, might data quality play a role in the failure to confirm the hypotheses?

  44. Guidelines for Evaluating Data Quality in Qualitative Studies • 1. Does there appear to be a strong relationship between the phenomena of interest as conceptualized (i.e., as described in the introduction) and as described in the discussion of the data collection approach? • 2. Does the report discuss efforts to enhance or evaluate the trustworthiness of the data? If not, is there other information that allows you to conclude that data are of high quality?

  45. 3. Which techniques (if any) did the researcher use to enhance and appraise data quality? Was the investigator in the field an adequate amount of time? Was triangulation used, and, if so, of what type? Did the researcher search for disconfirming evidence? Were there peer debriefings or member checks? Do the researcher’s qualifications enhance the credibility of the data? Did the report include information on the audit trail for data analysis?

  46. 4. Were the procedures used to enhance and document data quality adequate? • 5. Given the efforts to enhance data quality, what can you conclude about the credibility, transferability, dependability, and confirmability of the data? In light of this assessment, how much faith can be placed in the results of the study?

  47. Guidelines for Critiquing Quantitative Analyses • 1. Does the report include any descriptive statistics? Do these statistics sufficiently describe the major characteristics of the researcher’s data set? • 2. Were indices of both central tendency and variability provided in the report? If not, how does the absence of this information affect the reader’s understanding of the research variables? • 3. Were the correct descriptive statistics used (e.g., was a median used when a mean would have been more appropriate)?
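
Item 3's point about choosing the right index of central tendency can be illustrated briefly: with skewed data, an outlier pulls the mean while the median stays close to the typical value. The values below are invented for illustration.

```python
# Illustrative sketch: central tendency and variability for skewed data (values are made up).
import numpy as np

length_of_stay_days = np.array([2, 3, 3, 4, 4, 5, 5, 6, 45])  # one extreme outlier

print(f"Mean   = {length_of_stay_days.mean():.1f} days")        # inflated by the outlier (about 8.6)
print(f"Median = {np.median(length_of_stay_days):.1f} days")    # closer to the typical stay (4.0)
print(f"SD     = {length_of_stay_days.std(ddof=1):.1f} days")   # an index of variability
```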

  48. 4. Does the report include any inferential statistics? Was a statistical test performed for each of the hypotheses or research questions? If inferential statistics were not used, should they have been? • 5. Was the selected statistical test appropriate, given the level of measurement of the variables? • 6. Was a parametric test used? Does it appear that the assumptions for the use of parametric tests were met? If a nonparametric test was used, should a more powerful parametric procedure have been used instead?
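
Items 5 and 6 contrast parametric and nonparametric tests. The sketch below, on invented data for two independent groups, runs a t-test (parametric) and a Mann-Whitney U test (its rank-based nonparametric counterpart), with a rough Shapiro-Wilk check of the normality assumption; in a real critique, the choice would depend on level of measurement, sample size, and the plausibility of the assumptions.

```python
# Sketch: parametric vs. nonparametric comparison of two independent groups (data invented).
from scipy.stats import ttest_ind, mannwhitneyu, shapiro

group_a = [5.1, 6.3, 5.8, 7.2, 6.0, 5.5, 6.8, 6.1]
group_b = [4.2, 5.0, 4.8, 5.6, 4.5, 5.2, 4.9, 5.3]

# Shapiro-Wilk offers a rough check of the normality assumption in each group.
w_a, p_a = shapiro(group_a)
w_b, p_b = shapiro(group_b)
print(f"Normality p-values: group A = {p_a:.3f}, group B = {p_b:.3f}")

t_stat, t_p = ttest_ind(group_a, group_b)       # parametric: assumes roughly normal data
u_stat, u_p = mannwhitneyu(group_a, group_b)    # nonparametric: rank-based alternative
print(f"t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.3f}")
```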

  49. 7. Were any multivariate procedures used? If so, does it appear that the researcher chose the appropriate test? If multivariate procedures were not used, should they have been? Would the use of a multivariate procedure have improved the researcher’s ability to draw conclusions about the relationship between the dependent and independent variables?
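
As a hypothetical illustration of item 7, the sketch below fits a multiple linear regression with statsmodels, estimating an intervention's relationship with an outcome while statistically controlling for a baseline covariate. The variable names and data are invented; a real study might use a different multivariate procedure.

```python
# Sketch: multiple regression controlling for a covariate (invented data).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "outcome":  [62, 58, 71, 66, 75, 55, 69, 73, 60, 68],   # e.g., post-test score
    "group":    [0, 0, 1, 0, 1, 0, 1, 1, 0, 1],             # 1 = intervention, 0 = control
    "baseline": [60, 57, 64, 63, 68, 54, 62, 66, 59, 61],    # covariate to adjust for
})

model = smf.ols("outcome ~ group + baseline", data=data).fit()
print(model.summary())   # the 'group' coefficient estimates the intervention effect adjusted for baseline
```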

  50. 8. In general, does the report provide a rationale for the use of the selected statistical tests? Does the report contain sufficient information for you to judge whether appropriate statistics were used? • 9. Was there an appropriate amount of statistical information reported? Are the findings clearly and logically organized?
