
Teaching Registrars Research Methods Variable definition and quality control of measurements Prof. Rodney Ehrlich


Presentation Transcript


  1. Teaching Registrars Research Methods Variable definition and quality control of measurements Prof. Rodney Ehrlich

  2. Learning objectives • Scales of measurement • Conceptual vs operational variables • Precision, accuracy and validity of measurements: • Understand • Maximise • Assess

  3. Measurement scales • Categorical: • Nominal (no natural order), e.g. blood group • Ordinal, e.g. cancer staging • Continuous: • Discrete (counts), e.g. outpatient attendance • “True” continuous, e.g. haemoglobin

  4. Defining your variables • Conceptual variable = everyday term, or alternatively, theoretical construct • Operational variable = what is actually measured

  5. Defining your variables: examples • Renal function • Alcoholism • Risk taking behaviour • Chronic pain • Obesity

  6. Quality control of measurement • “Measuring instrument” = questionnaire, laboratory test, clinical judgement • Precision = reliability, repeatability or reproducibility • Accuracy = proximity to the true value • Validity = a subset of accuracy

  7. Precision • Repetition across occasions, testers or instruments gives the same result. • Lack of reliability may also indicate an accuracy or validity problem, but the two are separable, at least in theory. • Precision is not determinable from a single measurement.

  8. Example (dichotomous variable): precision of BP measurement • [Table of first and repeat BP readings not reproduced in transcript.] • What is the precision (reliability) of the BP measurement?

  9. Measuring precision • Categorical • Percent agreement (concordance); • Kappa statistic (takes chance agreement into account) • Continuous • Various (see Hulley, Ch. 12) • (NOT correlation coefficients)
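The two categorical agreement measures on the slide above can be sketched in a few lines of Python. The ratings below are invented for illustration; for real work a statistics library would be used instead.

```python
# Percent agreement and Cohen's kappa for two raters on a categorical
# variable. Ratings are invented example data, not from the slides.

def percent_agreement(a, b):
    """Proportion of paired ratings that agree exactly."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = len(a)
    po = percent_agreement(a, b)
    # Expected chance agreement from each rater's marginal frequencies.
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

rater1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "neg"]
rater2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]

print(percent_agreement(rater1, rater2))           # 0.8
print(round(cohens_kappa(rater1, rater2), 2))      # 0.6
```

Note how kappa (0.6) is lower than raw agreement (0.8): with two roughly balanced categories, the raters would agree about half the time by chance alone, and kappa discounts that.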

  10. Accuracy • Measurement agrees with another measurement accepted as the truth, the so-called “gold standard”. Most intuitive for physical and physiological measurements. • Validity • Used where the variable being measured is abstract, subjective or complex, or where a gold standard is debatable or not available.

  11. Types of validity • Face validity • Measurement, or question, makes sense to you, interviewers, experts, subjects, et al. • Construct validity • Measurement agrees with other operational measurements of the same concept. • Example: depression • Criterion validity • Measurement agrees with a “gold standard”.

  12. Measuring criterion validity • Categorical • Sensitivity: proportion of true positives testing positive on the instrument • Specificity: proportion of true negatives testing negative on the instrument • Continuous • More complex, but often involves choosing cutpoints, i.e. categorising as positive/negative

  13. Example of criterion validity of recall (categorical) • Recalled history of chicken pox compared with chicken pox antibodies in blood (gold standard). • Sensitivity, specificity; implications?
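The sensitivity and specificity calculations from slides 12–13 can be sketched as follows. The 2x2 counts are invented for illustration, not taken from the chicken pox study.

```python
# Sensitivity and specificity from a 2x2 table comparing an instrument
# (e.g. recalled chicken pox history) against a gold standard (e.g. blood
# antibodies). The counts below are invented for illustration.

def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # proportion of true positives testing positive
    specificity = tn / (tn + fp)  # proportion of true negatives testing negative
    return sensitivity, specificity

# tp: recalls chicken pox, antibodies present; tn: no recall, no antibodies
sens, spec = sens_spec(tp=80, fp=10, fn=20, tn=90)
print(sens, spec)  # 0.8 0.9
```

Here recall would miss 20% of people with antibody evidence of infection (sensitivity 0.8) and falsely label 10% of those without it (specificity 0.9).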

  14. Lack of precision/reliability • = “random error” • Descriptive study e.g. prevalence of hypertension? • Wider confidence interval (reduced power); • Need greater sample size (or repeated measurements) for same power • Comparative study e.g. does nurse home visiting improve hypertension control? • Same as for descriptive
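The effect of random error on confidence-interval width and sample size can be made concrete with the usual normal-approximation CI for a mean. The SD figures below are invented; the point is that measurement-error variance adds to between-subject variance.

```python
# Random error widens the confidence interval of a mean: measurement-error
# variance adds to between-subject variance, so more subjects are needed
# for the same precision. SD figures are invented for illustration.

import math

def ci_halfwidth(sd_subjects, sd_error, n, z=1.96):
    """95% CI half-width for a mean when each reading carries random error."""
    total_sd = math.sqrt(sd_subjects**2 + sd_error**2)
    return z * total_sd / math.sqrt(n)

def n_for_halfwidth(sd_subjects, sd_error, halfwidth, z=1.96):
    """Subjects needed to achieve a target CI half-width."""
    return math.ceil((z / halfwidth) ** 2 * (sd_subjects**2 + sd_error**2))

# Systolic BP: between-subject SD 10 mmHg, target half-width 2 mmHg.
print(n_for_halfwidth(10, 0, 2))  # 97  (perfectly precise instrument)
print(n_for_halfwidth(10, 5, 2))  # 121 (imprecise instrument needs more)
```

An alternative to recruiting more subjects, as the slide notes, is repeated measurements per subject, which shrinks the error variance by averaging.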

  15. Lack of accuracy/validity • = “systematic error” • Descriptive study • Biased estimate; • Cannot remove by increasing sample size. • Comparative study • If affects both groups equally, will mask a true difference or association; • If affects the two groups differently, could mask true difference or create a spurious difference or association (e.g. recall bias).
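A small simulation illustrates the key contrast with slide 14: a biased estimate cannot be rescued by a larger sample. The prevalence and bias figures are invented, and the misclassification model is deliberately crude.

```python
# Systematic error gives a biased estimate that a larger sample cannot
# fix: the estimate converges to truth plus bias, not to the truth.
# Prevalence (0.20) and false-positive rate (0.05) are invented figures.

import random

random.seed(1)

def estimate_prevalence(n, true_prev=0.20, false_pos_rate=0.05):
    """Prevalence as measured by an instrument that over-diagnoses:
    each true negative also tests positive with probability false_pos_rate."""
    positives = 0
    for _ in range(n):
        truly_pos = random.random() < true_prev
        tests_pos = truly_pos or random.random() < false_pos_rate
        positives += tests_pos
    return positives / n

for n in (100, 10_000, 1_000_000):
    print(n, round(estimate_prevalence(n), 3))
# The estimate settles near 0.24 (= 0.20 + 0.80 * 0.05), not 0.20.
```

Increasing n only narrows the interval around the wrong value, which is exactly the slide's warning about systematic error in a descriptive study.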
