1. Bias
2. Every epidemiological study should be viewed as a measurement exercise
Kenneth J. Rothman, 2002
3. What epidemiologists measure Rates, risks
Effect measures
Rate Ratio
Odds ratio
... yet these are just estimates of the "true" value
the amount of error cannot be determined
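A minimal sketch, with invented counts rather than data from any study cited here, of how these effect measures are computed from a 2×2 table and how a confidence interval captures only the random part of the error:

```python
# Hypothetical 2x2 table (counts invented for illustration):
#
#                 Diseased   Not diseased
#   Exposed          a            b
#   Unexposed        c            d

import math

a, b = 30, 970    # exposed:   30 cases among 1000
c, d = 10, 990    # unexposed: 10 cases among 1000

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)

risk_ratio = risk_exposed / risk_unexposed    # cumulative incidence ratio
odds_ratio = (a * d) / (b * c)                # cross-product ratio

# Approximate 95% CI for the odds ratio (Woolf method): the point estimate
# is only an estimate of the "true" value, with random error around it.
se_ln_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_ln_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_ln_or)

print(f"Risk ratio: {risk_ratio:.2f}")
print(f"Odds ratio: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

The interval reflects random error only; systematic error (bias), the topic of this session, is not captured by it.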
4. Objective of this session Define bias
Present types of bias and their influence on estimates in our studies
Identify methods to prevent bias
5. Should I believe the estimated effect?
6. Errors (a review) Two broad types of error
Random error: variability in our data that we cannot easily explain
Chance?
Systematic error (Bias)
8. Errors in epidemiological studies
9. Categories of bias Selection bias
Information bias
[Confounding]
10. Selection bias Errors in selecting the study population
When?
At inclusion into the study
How?
Preferential selection of subjects related to their
Disease status (cohort studies)
Exposure status (case-control studies)
11. Selection bias When?
How?
Consequences?
12. Types of selection bias Sampling bias
Ascertainment bias
surveillance
referral, admission
Diagnostic
Participation bias
self-selection (volunteerism)
non-response, refusal
healthy worker effect, survival
13. Selection bias in case-control studies
14. Selection bias How representative are hospitalised trauma patients of the population which gave rise to the cases?
15. Selection bias Higher proportion of controls drinking alcohol in trauma ward
than in non-trauma wards
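A rough sketch of the consequence, with all exposure prevalences invented for illustration: if controls come from a ward where alcohol use is over-represented, the exposure odds among controls rise and the odds ratio is distorted.

```python
# Minimal sketch of the trauma-ward control problem (all numbers invented).

def odds_ratio(p_cases: float, p_controls: float) -> float:
    """Exposure odds ratio from exposure prevalences in cases and controls."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

p_exposed_cases = 0.40         # alcohol use among cases (assumed)
p_population_controls = 0.20   # alcohol use in the source population (assumed)
p_trauma_ward_controls = 0.35  # alcohol use among trauma-ward controls (assumed)

print(f"OR with population controls:  {odds_ratio(p_exposed_cases, p_population_controls):.2f}")
print(f"OR with trauma-ward controls: {odds_ratio(p_exposed_cases, p_trauma_ward_controls):.2f}")
# The second estimate is biased because control selection is related to exposure.
```

Here the estimate is pulled towards the null; depending on the setting, selection bias can just as well exaggerate an association.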
16. SB: Diagnostic bias OC (oral contraceptive) use → breakthrough bleeding → increased chance of detecting uterine cancer
17. SB: Admission bias Prof. Pulmo, head of the respiratory department, 145 publications on asbestos/lung cancer
18. SB: Survival bias Contact with risk hospital leads to rapid death
19. SB: Non-response bias Controls chosen among women at home: 13,000 homes contacted → 1,060 controls
20. Selection bias in cohort studies
21. SB: Healthy worker effect
22. Healthy worker effect
23. Non-response bias
24. SB: Non-response bias
25. Non-response bias
26. SB: Loss to follow-up Difference in completeness of follow-up between comparison groups
e.g. study of disease risk in migrants
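A sketch of how differential loss to follow-up can distort a cohort estimate; all numbers are invented, and subjects lost before diagnosis are naively counted as non-cases.

```python
# Minimal sketch of differential loss to follow-up (all numbers invented).
# Diseased subjects in one group are lost more often than in the other,
# and the lost subjects are treated as if they never developed the outcome.

n_migrants, n_locals = 1_000, 1_000
cases_migrants, cases_locals = 30, 10      # true risks: 0.03 vs 0.01 (RR = 3)

# Assume half of the diseased migrants leave the area before diagnosis is
# recorded, while follow-up of locals is essentially complete.
observed_cases_migrants = cases_migrants * 0.5
observed_cases_locals = cases_locals * 1.0

true_rr = (cases_migrants / n_migrants) / (cases_locals / n_locals)
observed_rr = (observed_cases_migrants / n_migrants) / (observed_cases_locals / n_locals)

print(f"True risk ratio:     {true_rr:.1f}")      # 3.0
print(f"Observed risk ratio: {observed_rr:.1f}")  # 1.5, biased by differential loss
```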
27. Minimising selection bias Clear definition of study population
Explicit case and control definitions
Cases and controls from same population
Selection independent of exposure
Selection of exposed and non-exposed without knowing disease status
28. Categories of bias Selection bias
Information bias
29. Information bias Systematic error in the measurement of information on exposure or outcome
When?
During data collection
How?
Differences in accuracy
of exposure data between cases and controls
of outcome data between exposed and unexposed
30. Information bias When?
How?
Consequences?
31. Information bias: misclassification Measurement error leads to assigning subjects to the wrong exposure or outcome category
32. Nondifferential misclassification Misclassification does not depend on values of other variables
Exposure classification NOT related to disease status
Disease classification NOT related to exposure status
Consequence
if there is an association,
weakening of measure of association
bias towards the null
33. Nondifferential misclassification Cohort study: Alcohol → laryngeal cancer
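A minimal sketch of the consequence, loosely patterned on the alcohol and laryngeal cancer example; the cohort sizes, case counts, and classification probabilities below are all assumed for illustration.

```python
# Nondifferential exposure misclassification: the same sensitivity and
# specificity apply to diseased and non-diseased, so the risk ratio is
# pulled towards the null (all numbers invented).

n_exposed, n_unexposed = 10_000, 10_000
cases_exposed, cases_unexposed = 30, 10   # true risks: 0.003 vs 0.001 (RR = 3)

sensitivity, specificity = 0.80, 0.90     # exposure classification (assumed)

def classify(true_exposed, true_unexposed):
    """Counts classified as exposed / unexposed after nondifferential errors."""
    as_exposed = sensitivity * true_exposed + (1 - specificity) * true_unexposed
    as_unexposed = (1 - sensitivity) * true_exposed + specificity * true_unexposed
    return as_exposed, as_unexposed

cases_cl_exp, cases_cl_unexp = classify(cases_exposed, cases_unexposed)
n_cl_exp, n_cl_unexp = classify(n_exposed, n_unexposed)

true_rr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
observed_rr = (cases_cl_exp / n_cl_exp) / (cases_cl_unexp / n_cl_unexp)

print(f"True risk ratio:     {true_rr:.2f}")      # 3.00
print(f"Observed risk ratio: {observed_rr:.2f}")  # ~2.04, biased towards the null
```

Because the error rates do not depend on disease status, the bias is towards the null; differential misclassification, by contrast, can bias in either direction.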
34. Two main types of information bias Reporting bias
Recall bias
Prevarication
Observer bias
Interviewer bias
Biased follow-up
35. IB: Recall bias Mothers of children with malformations remember past exposures better than mothers of healthy children
36. IB: Prevarication bias Relatives of elderly people who died may deny that they were isolated
37. IB: Interviewer bias An investigator who knows the hypothesis may probe listeriosis cases more closely about soft cheese consumption
38. IB: Biased follow-up Unexposed less likely to be diagnosed with the disease than exposed
39. Minimising information bias Standardise measurement instruments
Administer instruments equally to
cases and controls
exposed / unexposed
Use multiple sources of information
questionnaires
direct measurements
registries
case records
Cross-reference information sources!
Use multiple controls
40. Questionnaire Favour closed, precise questions; minimise open-ended questions
Seek information on hypothesis through different questions
Disguise questions on hypothesis in range of unrelated questions
Field test and refine
Standardise interviewers' technique through training with the questionnaire
41. Bias Should be prevented!
At the PROTOCOL stage
Difficult to correct for bias at the analysis stage
42. References
43. Last word Scepticism is the chastity of the intellect
Don't give it away to the first attractive hypothesis that comes along
(MB Gregg)
44. Bias in randomised controlled trials Gold standard: randomised, placebo-controlled, double-blind study
Least biased
Exposure randomly allocated to subjects - minimises selection bias
Masking of exposure status in subjects and study staff - minimises information bias
45. Bias in prospective cohort studies Loss to follow-up
The major source of bias in cohort studies
Should we assume that all those lost did / did not develop the outcome?
Ascertainment and interviewer bias
Some concern: knowing exposure status may influence how the outcome is determined
Non-response, refusals
Little concern: bias arises only if non-response is related to both exposure and outcome
Recall bias
No problem: Exposure determined at time of enrolment
46. Bias in retrospective cohort & case-control studies Ascertainment bias, participation bias, interviewer bias
Exposure and disease have already occurred → differential selection / interviewing of compared groups is possible
Recall bias
Cases (or ill) may remember exposures differently than controls (or healthy)