
Chapter: Nine



Presentation Transcript


  1. Chapter: Nine VALIDITY, RELIABILITY, and TRIANGULATED STRATEGIES

  2. Question What are some reasons for the lack of validation in criminal justice research?

  3. Validity, Reliability, Replication, and Triangulated Strategies Validity = accuracy in findings (precedes reliability) Reliability = consistency and predictability of research Error = invalidity The goal is to minimize error because validity is never entirely proven. Triangulation is the logical method to assess validity. Replication is the logical method to assess reliability.

  4. Validity Does my measuring instrument in fact measure what it claims to measure? In other words, is it an accurate measure of the phenomenon under study?

  5. Types of Validity • Face validity: At face value, does the measuring instrument appear to measure what I am attempting to measure? Judgmental and nonempirical. • Content validity: Does the content of the instrument measure the concept in question? Judgmental and nonempirical, although subject to item analysis. • Construct validity (aka concept validity): Does the instrument in fact measure the concept in question, or was a measurement obtained of something other than what it claimed to measure? Refers to the fit between theoretical and operational definitions of terms.

  6. Types of Validity (CONT’D) • Pragmatic validity: Asks “does it work?” Two types: concurrent validity (current status), gauging present characteristics, vs. predictive validity (future status), forecasting. Both seek outside criteria to assess the accuracy of measurement, e.g., files of juvenile or criminal courts. • Convergent-discriminant validity/Triangulation: Convergent: different methods yield similar results when measuring the same concept; Discriminant: the same method yields different results when measuring different concepts. Both involve using multiple methods to measure multiple traits, i.e., triangulation (using multiple methods to measure the same phenomenon).

  7. Questions What are pretests or pilot studies? In the long run, is validity ever entirely demonstrated or proven?

  8. Reliability Reliability is demonstrated through stable and consistent replication of findings on repeated measurement. If the study were repeated, would the instrument yield stable and uniform measures?

  9. Types of Reliability • Consistency of measurement is determined by whether the set of items used to measure a phenomenon is highly interrelated and measures the same concept. If the instrument is reliable, the items will correlate relatively highly with one another. • Stability (predictability): the same respondent will give the same score on the same question on a second testing (excluding rival causal factors). Used to analyze events and behavioral data.
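The consistency idea on this slide — that a reliable item set correlates highly with itself — is usually quantified with Cronbach's alpha, a standard internal-consistency coefficient the slide does not name. A minimal sketch with fabricated scores:

```python
# Illustrative only: Cronbach's alpha for internal-consistency reliability.
# All scores below are invented for demonstration.

def cronbach_alpha(items):
    """items: list of per-item score lists (one score per respondent per item).
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)                 # number of items
    n = len(items[0])              # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(item) for item in items)
    # Each respondent's total score across all items.
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Five respondents answering three items on a 1-5 scale (fabricated data).
items = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 4, 2, 2, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # -> 0.92: the items hang together well
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold is a rule of thumb, not part of the formula.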

  10. Methods Used to Assess Reliability • Test-retest: the same instrument is administered at least twice to the same population – if results are the same, stability is assumed. Rival causal factors include pretest bias, reactivity, history, and maturation. • Multiple forms: do alternate forms of the instrument administered to the same group result in the same (stable) scores? Similar rival causal factor problems as the test-retest method. • Split-half technique: the same instrument is administered to one group at one time – are separate halves of the instrument similar in response (are they consistent)? This technique removes the problem of testing effects (an internal rival causal factor); however, the question remains – are the two halves equivalent?
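The split-half technique above can be sketched in code: split the instrument into odd- and even-numbered items, correlate the two half scores across respondents, then apply the Spearman-Brown correction (a standard adjustment, not mentioned in the slide) to estimate reliability for the full-length instrument. All scores are fabricated:

```python
# Illustrative split-half reliability with Spearman-Brown correction.
# Data are invented; the odd/even split is one common halving strategy.

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    """scores: list of per-respondent item-score lists (one administration)."""
    odd = [sum(row[0::2]) for row in scores]   # items 1, 3, ...
    even = [sum(row[1::2]) for row in scores]  # items 2, 4, ...
    r = pearson(odd, even)
    # Spearman-Brown: step up the half-test correlation to full length.
    return 2 * r / (1 + r)

# Six respondents, four items each (fabricated data).
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
]
print(round(split_half_reliability(scores), 2))  # -> 0.95
```

The correction is needed because correlating two half-length tests understates the reliability of the whole instrument; it does not, however, answer the slide's closing question of whether the two halves are truly equivalent.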

  11. Research Validation Examples • Drug Use Forecasting (DUF) or Arrestee Drug Abuse Monitoring Program (ADAM): self-report and urine testing. • Konecni (1979): compared bail and sentencing decisions using interviews, rating scale responses, observations of live hearings, archival analysis of sentencing files, and simulations of actual bail hearings. (Triangulation) • Shaw and McKay (1931): compared delinquency rates by mapping (ecological effects), in-depth case studies, interviews with parents, and clinical analysis. (Triangulation)
