
The Critical Review Paper


Presentation Transcript


  1. The Critical Review Paper Dr. Mahmoud Awara MRCPsych, MSc, DPM, DPP, MS (Internal Medicine)

  2. The basic knowledge required to critically examine research papers
  • Are the results reliable and valid?
  • Are they clinically important?
  • Do errors in design and methodology make the conclusions invalid?
  • Is further research needed?
  • Is this research relevant to my clinical practice?

  3. The Basic Study Designs

  4. Which study design answers which clinical question most reliably?
  1. Diagnosis
  • How useful is the MMSE in detecting patients with dementia?
  • A cross-sectional study, comparing the proportion of patients who really have the disorder and test positive with the proportion of patients who do not have the disorder but still test positive (a worked sketch of these proportions follows below).
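
  A minimal sketch of the comparison just described, using hypothetical MMSE-versus-dementia counts (the numbers are illustrative only, not taken from any real study):

```python
# Hypothetical 2x2 table: MMSE screening result vs. true dementia status.
true_pos = 80   # dementia present, test positive
false_neg = 20  # dementia present, test negative
false_pos = 30  # no dementia, test positive
true_neg = 170  # no dementia, test negative

# Proportion of people with the disorder who test positive.
sensitivity = true_pos / (true_pos + false_neg)
# Proportion of people without the disorder who test negative.
specificity = true_neg / (true_neg + false_pos)
# Proportion of people without the disorder who still test positive
# (the second proportion referred to on the slide).
false_positive_rate = false_pos / (false_pos + true_neg)

print(f"Sensitivity:         {sensitivity:.2f}")          # 0.80
print(f"Specificity:         {specificity:.2f}")          # 0.85
print(f"False positive rate: {false_positive_rate:.2f}")  # 0.15
```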

  5. 2. Aetiology
  • What caused my patient's disorder?
  • What are the chances that the intervention I am going to use will have a specific adverse effect?
  • Suitable designs include studies that compare the frequency of an exposure in a group of people with the disease of interest (cases) with a group of people without the disease (controls):
  • An RCT
  • A case-control study
  • A cohort study

  6. 3. Therapeutics
  Clinical questions:
  • How do I select a particular treatment?
  • Is this treatment better than placebo?
  • Can I improve my service by introducing a new model of care?
  • A randomised controlled trial, or a systematic review or meta-analysis of numerous such trials.

  7. 4. Prognosis
  • What is the likely outcome of my patient's illness?
  • A cohort study

  8. 5. Cost-effectiveness
  • An economic analysis

  9. 6. Planning Services
  • Is it worth setting up a service for ethnic minorities, or a perinatal psychiatry service, in my area?
  • Cross-sectional surveys, which can be used to measure the prevalence of a disorder within a population.

  10. Bias
  Selection bias
  • Berkson (admission rate) bias: if you examine parasuicidal behaviour and social support on the ward, you will be more likely to find a link between poor social support and parasuicide, because we tend to admit those with poor social support more frequently than those with good support.
  • Neyman bias: it creates a case group that is not representative of cases in the community. For example, examining ownership of a lethal weapon (a gun) and suicide attempts in a hospital-based case-control study would miss those who successfully killed themselves with guns and never reached hospital. This would greatly reduce the odds ratio linking access to lethal weapons with suicide risk (a numerical sketch of this follows below).
  • To minimise selection bias, use control subjects who are as similar as possible to the study subjects except for the exposure of interest.
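
  A small sketch of how Neyman bias can shrink the odds ratio in a hospital-based study; all counts are invented purely for illustration:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table: (a*d) / (b*c)."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical community picture: gun owners are over-represented among all cases.
community_or = odds_ratio(exposed_cases=60, unexposed_cases=40,
                          exposed_controls=30, unexposed_controls=70)

# Hospital-based study: many gun owners die and never reach hospital,
# so the case group loses exposed cases (Neyman bias).
hospital_or = odds_ratio(exposed_cases=20, unexposed_cases=40,
                         exposed_controls=30, unexposed_controls=70)

print(f"Community odds ratio: {community_or:.1f}")  # 3.5
print(f"Hospital odds ratio:  {hospital_or:.2f}")   # 1.17
```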

  11. Bias
  Information bias
  • Recall bias ("effort after meaning"): to minimise it, interview relatives or consult medical records.
  • Observer bias: arises when the interviewer is aware of which subjects have the illness of interest. To minimise it:
  • Blinding
  • Self-administered questionnaires
  • Interviewers unaware of the study hypotheses

  12. Confounding
  • A confounder is a variable associated with both the exposure and the outcome but not on the causal pathway.
  • Positive confounding produces a false association.
  • Negative confounding obscures a true association.

  13. Adjusting for Confounders
  In the design
  • Restriction (exclusion): you select only those who have the same value of the confounding variable.
  • Matching: the unexposed (control) subjects are deliberately selected to be similar to the index subjects with regard to any number of potential confounders.
  • Disadvantages: recruitment can be difficult, and the effect of a matched variable cannot be examined.

  14. Adjusting for Confounders
  In the analysis
  • Multivariate techniques, such as logistic regression, which assumes a linear relationship between the logarithm of the odds of being a case and the exposures. This model can take account of a large number of variables simultaneously.
  • Stratification, where the relative risk (or odds ratio) is calculated within each level of the confounding variable and a summary statistic is calculated (a sketch of this follows below).
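
  A sketch of the stratification idea: an effect estimate is computed within each level of the confounder and combined into a Mantel-Haenszel summary; shown here with odds ratios (the same logic applies to relative risks), using made-up counts:

```python
# Each stratum is one level of the confounding variable, given as a
# 2x2 table (a, b, c, d) = (exposed cases, exposed controls,
# unexposed cases, unexposed controls).
strata = [
    (40, 20, 30, 50),   # stratum 1 of the confounder
    (15, 10, 20, 40),   # stratum 2 of the confounder
]

def mantel_haenszel_or(tables):
    """Summary odds ratio across strata: sum(a*d/n) / sum(b*c/n)."""
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return numerator / denominator

for i, (a, b, c, d) in enumerate(strata, start=1):
    print(f"Stratum {i} OR: {(a * d) / (b * c):.2f}")     # 3.33 and 3.00
print(f"Mantel-Haenszel summary OR: {mantel_haenszel_or(strata):.2f}")  # ~3.2
```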

  15. Chance
  • The role of chance is a problem dealt with largely by statistical techniques.
  • Statistically significant (p < 0.05) means that such an association could have arisen by chance less than 5% of the time, i.e. it is unlikely to have occurred by chance.
  • The statistical power of a study gives the probability that a type II error will not occur, and depends on:
  • The strength of the expected association.
  • The prevalence of the exposure.
  • The significance level (usually 5%).
  • The sample size (a sample-size sketch follows below).
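
  One standard approximation that makes the power points concrete: the sample size needed per group to compare two proportions, here with placeholder prevalences and the conventional 5% significance level and 80% power:

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# A weaker expected association (smaller difference) needs far more subjects.
print(round(n_per_group(0.30, 0.50)))  # ~90 per group
print(round(n_per_group(0.30, 0.35)))  # ~1374 per group
```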

  16. Reverse Causality
  • It is a particular problem for descriptive and case-control studies.
  • It simply means that an association between an exposure and a disease arises because the disease causes the exposure, rather than vice versa.
  • Example: life events could be a cause or an effect of depression.

  17. Observational Studies
  Analytic studies:
  • Case-control studies and cohort studies.
  Descriptive studies:
  • Case reports/series.
  • Audit projects.
  • Cross-sectional surveys.
  • Qualitative studies.

  18. Observational Studies
  1. Case-control studies
  • Subjects with and without a disease are compared retrospectively on their rate of exposure.
  • Can be used in studies addressing aetiology and diagnosis.
  • Advantages: suitable for rare diseases, distant and multiple exposures; quick and inexpensive; relatively few subjects required.
  • Disadvantages: prone to bias and confounding; inefficient for rare exposures; temporal relationships can be difficult to establish.

  19. Observational Studies
  2. Cohort studies
  • Exposed and non-exposed subjects are compared prospectively on their rate of disease (a relative-risk sketch follows below).
  • Can be used to study aetiology, harm, and prognosis.
  • Advantages: can evaluate rare exposures, temporal relationships, and multiple outcomes; also less prone to bias.
  • Disadvantages: expensive and lengthy; unsuitable for rare diseases; loss to follow-up threatens validity.
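
  A small sketch of the basic cohort comparison: the rate of disease in exposed versus unexposed subjects summarised as a relative risk with a 95% confidence interval (counts are hypothetical):

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Hypothetical cohort: (developed disease, stayed well) in each exposure group.
exposed_ill, exposed_well = 30, 170
unexposed_ill, unexposed_well = 15, 285

risk_exposed = exposed_ill / (exposed_ill + exposed_well)          # 0.15
risk_unexposed = unexposed_ill / (unexposed_ill + unexposed_well)  # 0.05
relative_risk = risk_exposed / risk_unexposed                      # 3.0

# 95% CI on the log scale (standard error of the log relative risk).
se = sqrt(1 / exposed_ill - 1 / (exposed_ill + exposed_well)
          + 1 / unexposed_ill - 1 / (unexposed_ill + unexposed_well))
z = NormalDist().inv_cdf(0.975)
ci = (exp(log(relative_risk) - z * se), exp(log(relative_risk) + z * se))

print(f"Relative risk: {relative_risk:.1f}, 95% CI: {ci[0]:.1f} to {ci[1]:.1f}")
```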

  20. Observational Studies
  3. Clinical audit
  • A systematic and critical analysis of the quality of medical care.
  • In contrast to research, audit measures what is actually happening – preferably against defined standards – attempts to improve usual practice, and then re-audits to close the audit loop/cycle.
  • It is the best measure of your own population, but it can be resource intensive.

  21. Observational Studies
  4. Qualitative studies
  • Concerned with personal meanings, attitudes, experiences, feelings, values and other types of opinion.
  • Can study complex issues, but data collection and analysis are difficult to plan, and their subjectivity makes findings difficult to compare.

  22. Observational Studies
  5. Surveys
  • Surveys are generally cross-sectional studies of the prevalence and associations of a disorder.
  • If they are conducted twice, incidence and predictors may be studied.
  • They can be used to study disease frequency and associations, which helps in planning services and generating hypotheses.
  • They cannot distinguish cause and effect, are susceptible to bias and confounding, and cannot evaluate the timing of exposure.

  23. Observational Studies
  6. Ecological studies
  • A special type of survey, also known as a correlational study, in which whole populations rather than individuals are studied, often using one or more computerised databases.
  • They measure associations of disease at a population level which do not necessarily hold at an individual level.
  • Because they describe populations rather than individuals, they are prone to incorrect conclusions about associations, known as the ecological fallacy.

  24. Observational Studies
  7. Case reports/series
  • Simple descriptions of events in a single case or a number of cases.
  • Prone to chance associations and all sorts of bias.
  • Case reports and series should not, however, be dismissed, as they can be very informative and influential; e.g. Helicobacter pylori infection and peptic ulcer.

  25. Experimental studies
  • Clinical trials.
  • Economic analyses.
  • Systematic reviews and meta-analyses of these.

  26. Clinical Trials
  1. Open trials
  • All the subjects are given one treatment.
  • Easy and cheap, but there are no controls.
  2. Controlled trials
  • Two treatments are compared.
  • Relatively straightforward, but there is no randomisation.

  27. Clinical Trials
  3. RCTs
  • Randomisation, blinding, and intention-to-treat analysis (a randomisation sketch follows below).
  • They minimise bias and confounding.
  • Expensive, difficult, and time consuming.
  4. Cluster trials
  • Groups of individuals are randomised.
  • Can establish the efficacy of various health services.
  • Often difficult to find enough groups to give adequate power.
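
  A sketch of one way the randomisation step can be done: block randomisation keeps the arms balanced as recruitment proceeds (block size and arm labels are arbitrary choices for illustration):

```python
import random

def block_randomise(n_subjects, block_size=4, arms=("treatment", "control")):
    """Allocate subjects in shuffled blocks so the arms stay balanced."""
    allocation = []
    while len(allocation) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]

random.seed(42)  # fixed seed only so the example is reproducible
print(block_randomise(10))
```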

  28. Clinical Trials
  5. Pragmatic trials
  • All patients in a location are randomised.
  • A representative and generalisable test of effectiveness.
  • Difficult to control, blind, and avoid excessive dropouts.

  29. Clinical Trials
  6. Crossover trials
  • Subjects are their own controls.
  • Can study treatment of rare disorders.
  • Liable to carryover effects and order effects.
  7. N-of-1 trials
  • A single subject, where two or more treatments are blindly given in succession to an individual patient.
  • Liable to carryover effects and order effects.

  30. Clinical Trials
  8. Systematic reviews
  • Comprehensive reviews of all the studies in a given area, identified using explicit criteria.
  • Meta-analysis is complementary to systematic reviewing, as it combines studies mathematically to provide a summary "best estimate" of any true effect. This is achieved by "weighting" studies according to size and/or quality.

  31. Clinical Trials
  8. Systematic reviews (continued)
  • They are liable to publication bias, location bias, and inclusion bias.
  • Meta-analyses are unreliable if based on non-systematic reviews, and they are influenced by the quality of the original trials.
  • Heterogeneity arises where the results from different studies differ to a statistically significant extent, making the summary estimate unreliable (a pooling and heterogeneity sketch follows below).
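
  A sketch of the weighting and heterogeneity ideas: inverse-variance fixed-effect pooling of log odds ratios, with Cochran's Q and I² as heterogeneity measures; the study estimates are invented:

```python
from math import exp, log

# Invented per-study results: (odds ratio, variance of its log odds ratio).
studies = [(0.70, 0.04), (0.85, 0.02), (0.60, 0.09), (0.95, 0.05)]

log_ors = [log(or_) for or_, _ in studies]
weights = [1 / var for _, var in studies]   # more precise (larger) studies weigh more

pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q and I^2: how much the studies disagree beyond chance.
q = sum(w * (y - pooled_log_or) ** 2 for w, y in zip(weights, log_ors))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"Pooled odds ratio: {exp(pooled_log_or):.2f}")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0%}")
```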

  32. Economic analysis
  1. Cost-minimisation, in which only the inputs are considered, choosing the cheapest of two equally effective treatments ("technical efficiency").
  2. Cost-benefit, where all the inputs and outputs are measured in monetary terms, e.g. the cost of drugs, staff and services against the cost of time off work with and without treatment.

  33. Economic analysis
  3. Cost-effectiveness, in which costs are related to a clinical outcome measure, such as life years gained. It cannot compare different outcomes or choose between interventions providing more benefit at greater cost.
  4. Cost-utility, in which an outcome such as the quality-adjusted life year (QALY) combines quantitative and qualitative information about the amount of life gained and its relative quality to individuals (a cost-per-QALY sketch follows below).
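
  A short arithmetic sketch of the cost-utility comparison: the incremental cost-effectiveness ratio expressed as cost per QALY gained (all figures are made up):

```python
# Invented costs and outcomes for two treatment options.
old_cost, old_qalys = 2_000.0, 1.20   # QALYs = years gained x quality weight
new_cost, new_qalys = 5_000.0, 1.60

# Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
icer = (new_cost - old_cost) / (new_qalys - old_qalys)
print(f"ICER: {icer:,.0f} per QALY gained")  # 7,500 per QALY
```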

  34. Levels of evidence
  • Meta-analyses of RCTs.
  • Individual RCTs.
  • Well-designed non-randomised controlled studies.
  • Other well-designed quasi-experimental studies.
  • Evidence from well-designed non-experimental studies.
  • Evidence from expert committee reports and the experience of respected authorities.

  35. Thank You
  Mahmoud Awara
