This presentation covers the importance of research in the field of policing, the evaluation of interventions and policies, and the use of scientific methods, with the aim of providing knowledge and insight into evidence-based practices in law enforcement.
Oregon POP Conference - 2015 Research Methods (101) for Problem-Oriented Policing Greg Stewart, Portland Police Bureau Kris Henning, Portland State University
Outline • Why do we need research as a field? • Why should your agency do research on interventions/policies you implement? • What are the scientific methods we use to evaluate CJ interventions/policies?
What does this have to do with policing? How do we use research methods? Here is a hypothetical surveillance photograph of vandalism to a boat.
What does this have to do with policing? Would this be a valid throwdown to show a witness?
We already use “Research Methods” How do we use research methods? DUII Testing? Interrogation?
Some of the Things We Do Backfire Petrosino, A., Turpin-Petrosino, C., & Buehler, J. (2003). Scared straight and other juvenile awareness programs for preventing juvenile delinquency: A systematic review of the randomized experimental evidence. The Annals of the American Academy of Political and Social Science, 589(1), 41-62.
The Criminal Justice System Costs a Lot • We spend $228 billion a year on the U.S. criminal justice system – roughly $791 per person in 2007 • Ineffective strategies in criminal justice: • Divert money from effective approaches • Increase professional pessimism about crime prevention • Decrease the prestige of CJ in the eyes of the community • May increase the broader crime rate
Evidence-Based Practices - Medicine The development of scientific knowledge through experimentation into the causes, prevention, and treatment of medical disorders
Evidence-Based Practices in CJ Are Growing • Problem-Oriented Policing • Intelligence-led Policing • Situational Crime Prevention • Evidence-based Corrections
Can You Trust “New Crime Solutions”? • How many times have you been approached by a vendor, citizen, or city council member with a “new” approach to a public safety problem? • Scientific research provides a method for evaluating the evidence for (or against) practices • Measurement of outcomes (accuracy & reliability) • Sample size & statistics (effect size) • Sampling (generalizability) • Study design (cause & effect)
Why Should Your Agency Do Research? • How many of you have the following problems in your city? • Social/physical disorder in city parks • Recurring contacts with people suffering from MI • Speeding, running red lights • Have any of you found something that helped in your city? • Share this strategy with others • Help build body of evidence for our field
Different Kinds of Research (Questions) • Descriptive studies • What time of day accounts for the majority of residential burglaries? • Among officers injured at work, were the majority injured as a result of suspect behaviors or accidents? • Explanatory studies • Does being a victim of crime (vs. non-victims) increase or decrease satisfaction with the police? • Are police officers better than lay people at detecting false confessions?
Different Kinds of Research (Questions) • Evaluation studies – Interventions/programs • Will crime decrease in “hot spots” if we add four additional patrols per day? • Do red light cameras decrease accidents at intersections? • Evaluation studies – Policies • Does restricting access to cans of spray paint through a new city ordinance result in less graffiti? • If sentences for drunk driving are doubled by the legislature will this reduce alcohol-involved crashes?
Evaluation Research • Impact evaluation • Were changes observed in targeted problem after intervention/policy was implemented? • Is intervention/policy cause of the change? • Process evaluation • Was intervention/policy implemented as planned? • How did deviations from implementation plan impact findings?
Evaluation Research • Impact evaluation: did the targeted problem show a positive change, a negative change, or no change? • Process evaluation: was the intervention implemented as planned, or not implemented as planned?
Step 1. Measurement of Outcomes • Direct observables as outcome variables • Calls for service • Criminal incident reports • Constructs • Social disorder • Collective efficacy • Mental illness • Police legitimacy. We need to define these constructs & develop ways to measure them.
Measurement Quality • Reliability • Does measure/scale produce consistent scores when consistency should be expected? • Inter-rater, test-retest • Validity • Does measure/scale produce accurate scores? • Face, concurrent, predictive
Reliability Test Suspect’s hair style (check one) • Afro/natural • Bald/balding • Bushy • Curly • Fade • Flat-top/military • Greasy • Straight • Jheri curl • Wig • Ponytail • Wavy • Processed
Inter-rater Reliability • Inter-Rater Reliability (IRR) in police officers’ coding of mental illness:
Officer #1 | Officer #2
Mentally ill | Mentally ill
Not MI | Not MI
Not MI | Not MI
Mentally ill | Not MI
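The agreement between the two officers above can be summarized with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch in plain Python; the four rating pairs mirror the slide's table.

```python
# Cohen's kappa for two raters coding the same cases.

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters on the same cases."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement if each rater assigned labels at their own base rates
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

officer1 = ["MI", "Not MI", "Not MI", "MI"]
officer2 = ["MI", "Not MI", "Not MI", "Not MI"]
kappa = cohens_kappa(officer1, officer2)  # raw agreement is 3/4; kappa = 0.5
```

With only four cases this is illustrative; real IRR studies use many more rated cases before trusting the kappa value.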
Measurement Validity • Does the measure/scale obtain valid results? • Are the scores accurate? • More difficult to establish than reliability • Reliability sets the limit for validity [Target diagram: not reliable or accurate; reliable but not accurate; reliable and accurate]
Concurrent Validity • Does the new scale produce scores similar to another measure of the same thing (ideally a “gold standard”)?
Officer’s Assessment | Jail Screen by QMHP
Mentally ill | Mentally ill
Mentally ill | Not MI
Not MI | Not MI
Mentally ill | Mentally ill
New Crime Prevention Initiative • Residential burglary up 20% in your city • High economic cost • Emotional impact of victimization • Literature review • High rate of repeat victimization • Highest during first month after 1st offense • New intervention • Send crime prevention officer to residence • Conduct burglary risk audit • Identify resources in community (e.g., locks program)
Step 1. Measurement of Outcomes • What could you measure (reliably) that would speak to program having an impact? • ??? • ???
Step 1. Measurement of Outcomes • Overall decrease in city’s residential burglaries • Decrease in households experiencing repeat victimization (within 1 month) • % residents reporting satisfaction with police response • # residents using city’s locks program • CPTED changes made by residents following burglary
Is the Apparent Reduction “Big Enough”? [Bar chart: observed reductions of −8%, −25%, −50%, and −37%]
Is the Apparent Reduction “Big Enough”? • Statistical procedures can be used to determine whether difference observed might be due to: • Random fluctuation (unrelated to intervention) • Reliable change (possibly due to intervention)
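One standard procedure for this question is a two-proportion z-test. The counts below are hypothetical (1,000 burglarized households before and after, repeat rates of 16% and 12%); they are not data from the presentation, just a sketch of the mechanics.

```python
# Two-proportion z-test: could a drop in the repeat-victimization
# rate plausibly be random fluctuation?
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approx.
    return z, p_value

z, p = two_proportion_z(160, 1000, 120, 1000)  # 16% before vs. 12% after
# p < 0.05 here, so the drop is unlikely to be chance alone
```

If the p-value came out above .05, the honest conclusion would be "could be random fluctuation," not "the program failed."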
Is the Program the Cause of the Change? -25% = statistically significant (AKA unlikely to be due to chance alone)
Is the Program the Cause of the Change? • To establish a “causal relationship” (i.e., burglary audits cause decrease in repeat victimization) you need: • Statistically significant relationship - between intervention & outcome • Temporal ordering – cause (program) comes before measure of the effect (% repeats) • Non-spuriousness – other explanations for change ruled out
Temporal Ordering – Does the Cause Precede the Effect? [Time-series chart with an “Audits Started >>” marker; period averages of 12% and 16%]
Non-Spuriousness (other explanations?) [Time-series chart with an “Audits Started >>” marker; average falls from 16% to 12%]
Non-Spuriousness (other explanations?) [Time-series chart; average falls from 16% to 12%] Is the change the result of the audits or of a news report?
Interrupted Time Series w/ Comparison [Chart: Precinct A (audits) vs. Precinct B (nothing)] The change is probably not due to the news report, because the report reached both precincts.
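The comparison-precinct logic can be written as a simple difference-in-differences calculation: subtract the comparison precinct's change from the audited precinct's change, so anything shared by both precincts (like the news report) cancels out. The rates below are hypothetical repeat-victimization proportions.

```python
# Difference-in-differences: treated change net of comparison change.

def diff_in_diff(treat_pre, treat_post, comp_pre, comp_post):
    """Change in the treated unit minus the change in the comparison unit."""
    return (treat_post - treat_pre) - (comp_post - comp_pre)

effect = diff_in_diff(
    treat_pre=0.16, treat_post=0.12,   # Precinct A (audits): 16% -> 12%
    comp_pre=0.15, comp_post=0.15,     # Precinct B (nothing): flat
)
# effect = -0.04: a 4-point drop beyond anything shared by both precincts
```

This arithmetic only rules out influences common to both precincts; precinct-specific events (the "other possible causes" on the next slide) still need to be considered.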
Other Possible Causes? [Chart: Precinct A vs. Precinct B time series]
Random Assignment vs. Assignment By Precinct • Randomly assign victims to the different groups/conditions • Creates equal groups at the start* [Diagram: two example random assignments, each producing balanced groups] * When sample sizes are sufficiently large
Experimental Design
Audit | % Repeats (Time 2)
Yes | 12.3%
No | 17.3%
Groups equal from the start (Time 1) • Statistically significant relationship between intervention & outcome • Temporal ordering – cause (audit yes/no) comes before the effect (measurement of repeats) • Non-spuriousness – other explanations for change ruled out
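The first criterion above can be checked with a chi-square test on the 2x2 outcome table. The 12.3% vs. 17.3% repeat rates follow the slide; the group sizes (1,000 victims per arm) are a hypothetical assumption for illustration.

```python
# Chi-square test for a 2x2 table [[a, b], [c, d]] (no continuity correction).

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic comparing observed cells to expected cells."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Audit group: 123 repeats / 877 none; no-audit group: 173 repeats / 827 none
chi2 = chi_square_2x2(123, 877, 173, 827)
# chi2 exceeds 3.84 (the df=1, alpha=.05 cutoff): audit status and
# repeat victimization are reliably related in this hypothetical sample
```

With smaller arms the same percentages could fail to reach significance, which is why sample size appears earlier in the deck as part of evaluating evidence.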
Confidence in Findings, from low to very high: • Anecdotes / Opinions • Post Test • Pre-Post • Interrupted Time Series • Interrupted Time Series w/ Comparison • Randomized Experiment. Use the best design possible given cost, time, ethics, etc.