
Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports

Presented at the Annual Forensic Examiner Training, Honolulu, Hawaii, March 15, 2005.


Presentation Transcript


  1. Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports • Presented at the Annual Forensic Examiner Training, Honolulu, Hawaii, March 15, 2005.

  2. Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports • Marvin W. Acklin, Ph.D., Department of Psychiatry, JABSOM • Reneau C. Kennedy, Ed.D., Adult Mental Health Division • Richard Robinson, Argosy University • Bria Dunkin, Argosy University • Joshua Dwire, M.S., Argosy University • Brian Lees, Argosy University

  3. Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports • With special assistance from: • The Honorable Marsha J. Waldorf • Judge Waldorf’s Law Clerks • Teresa Morrison • Kirsha Durante • Crystal Mueller, AMHD, Forensic Services Research

  4. Logic • “Studies have uniformly concluded that judges typically defer to the opinions of examiners, with rates of examiner-judge agreement often exceeding 90%. Judges typically rely solely on examiners’ written reports and, hence, the quality of the data and reasoning presented in such reports become a critical part of the CST adjudication process” (Skeem & Golding, 1998, p. 357).

  5. Method • We utilized the rationale of Skeem and Golding (Skeem, Golding, Cohn, & Berge, 1998; Skeem & Golding, 1998), who identified a number of factors linked to the quality of forensic reports: methods, opinions, and the rationale for forensic opinions. We examined these factors as indicators of report quality.

  6. Purpose • To examine a representative sample of three panel reports submitted to the 1st Circuit Court Judiciary. • To assess factors related to quality and reliability.

  7. Method: Study 1 • Sample • We examined 50 felony cases adjudicated in the 1st Circuit Court. • Inclusion criteria were a full set of three panel reports and a finding regarding fitness by the court. • The 50 cases were drawn at random from 416 files stored at the 1st Circuit Court.
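The random draw described above is straightforward to reproduce. Here is a minimal Python sketch, assuming the 416 files can be indexed in a list; the file identifiers and seed are hypothetical, not taken from the study:

```python
import random

# Hypothetical identifiers for the 416 case files stored at the 1st Circuit Court.
case_files = [f"1CC-{i:04d}" for i in range(1, 417)]

random.seed(2005)  # fixed seed so the draw can be reproduced
sample = random.sample(case_files, 50)  # 50 cases, drawn without replacement
```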

  8. Method: Studies 2 & 3 • Utilizing the same selection procedure as the larger study, we examined two subsets of the 416 files. • Three panel examinations prior to a finding of Not Guilty by Reason of Insanity (NGRI: n = 10). • Three panel examinations ordered after a request for Conditional Release (CR: n = 10).

  9. Method: Study 4 • Records from the Prosecuting Attorney, City & County of Honolulu, for 2000-2004.

  10. Procedures: Inter-rater Agreement • A coding manual of pertinent items was created and its terminology refined. • Pre-coding trainings were conducted using three reports to refine coders’ understanding of the items in the coding manual and to familiarize them with the formats of the forensic reports. • An inter-rater reliability trial (IRRT) was conducted using five files (15 reports). • Results of the first IRRT appear on the next slide.

  11. Procedure: Inter-rater Agreement • Results of the 1st IRRT: • Mean kappa .80 • Range .13 to 1.0

  Examiner pair   Kappa   Range
  1, 2            .87     .18 to 1.0
  1, 3            .77     .13 to 1.0
  1, 4            .86     .18 to 1.0
  2, 3            .74     .26 to 1.0
  2, 4            .84     .18 to 1.0
  3, 4            .72     .13 to 1.0

  12. Procedure: Inter-rater Agreement • A second inter-rater reliability trial (IRRT) was conducted using five files (15 reports) to refine the coding criteria. • Results of the second IRRT: • Mean kappa .95 • Range .55 to 1.0
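As an illustration of how pairwise coefficients like those in the two trials above could be computed, here is a minimal sketch using scikit-learn's cohen_kappa_score; the item-level codes are hypothetical stand-ins for the actual coding data:

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical codings: one list of item-level codes per coder,
# aligned so that position i refers to the same item in the same report.
codings = {
    1: [1, 0, 1, 1, 0, 1, 1, 0],
    2: [1, 0, 1, 1, 0, 1, 0, 0],
    3: [1, 0, 0, 1, 0, 1, 1, 0],
    4: [1, 0, 1, 1, 1, 1, 1, 0],
}

# Kappa for every coder pair (1,2), (1,3), ..., (3,4), then the mean,
# mirroring the pairwise layout of the first IRRT table.
kappas = {pair: cohen_kappa_score(codings[pair[0]], codings[pair[1]])
          for pair in combinations(sorted(codings), 2)}
mean_kappa = sum(kappas.values()) / len(kappas)
print(kappas, mean_kappa)
```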

  13. Inter-rater Agreement Coefficients (Second IRRT): 5 Cases, 3 Raters

  14. Study 1: Three Panel Evaluations (CST / Criminal Responsibility / Dangerousness)

  15. Number of Different Examiners • 32 Examiners

  16. Credential of Examiner (N = 416 cases; n = 150 reports)

  17. Classification of Criminal Offense (N = 416 cases; n = 150 reports)

  18. Case Caption Visible? (N = 416 cases; n = 150 reports)

  19. Charge Visible? (N = 416 cases; n = 149 reports)

  20. Examiner Opinion on Competency to Stand Trial (CST) (N = 416 cases; n = 150 reports)

  21. Examiner Rationale for CST Opinion (N = 416 cases; n = 150 reports)

  22. Mention of Specific Impairment in Relation to CST Opinion (N = 416 cases; n = 150 reports)

  23. Examiner Opinion Concerning Criminal Responsibility (N = 416 cases; n = 150 reports)

  24. Examiner Rationale for Criminal Responsibility Opinion (N = 416 cases; n = 150 reports)

  25. Examiner Opinion Regarding Dangerousness (N = 416 cases; n = 150 reports)

  26. Examiner Rationale for Dangerousness Opinion (N = 416 cases; n = 150 reports)

  27. Examiner Suggestions for Managing Dangerousness/Risk Reduction (N = 416 cases; n = 150 reports)

  28. Evaluation Methods (N = 416 cases; n = 150 reports)

  29. Judicial Determination of CST (N = 416 cases; n = 150 reports)

  30. Ease of Data Extraction (N = 416 cases; n = 149 reports)

  31. Majority/Unanimity Agreement • CST • 58% = 100% agreement • 32% = At least 2 examiners and judge agree • 4% = At least 1 examiner and judge agree • 6% = Examiners agree, judicial determination differs
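A sketch of how each case might have been binned into the categories above. The decision logic is reconstructed from the category labels and is an assumption, as are the opinion values:

```python
def agreement_category(examiner_opinions, judicial_finding):
    """Bin one case by how many of the three panel opinions match the court's finding."""
    matches = sum(op == judicial_finding for op in examiner_opinions)
    if matches == 3:
        return "100% agreement"
    if matches == 2:
        return "at least 2 examiners and judge agree"
    if matches == 1:
        return "at least 1 examiner and judge agree"
    # No examiner matched the judge; check whether the panel itself was unanimous.
    if len(set(examiner_opinions)) == 1:
        return "examiners agree, judicial determination differs"
    return "no agreement"

# Example: two of three examiners found the defendant fit, as did the court.
print(agreement_category(["fit", "fit", "unfit"], "fit"))
# -> at least 2 examiners and judge agree
```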

  32. Inter-examiner Agreement • CST • Mean kappa .42 • Range .36 to .55 • Responsibility • Mean kappa .41 • Range .39 to .45 • Dangerousness • Mean kappa .25 • Range .14 to .36
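For reference, Cohen's kappa is (p_o - p_e) / (1 - p_e), where p_o is the observed agreement rate and p_e the agreement expected by chance from each rater's marginal frequencies. On common benchmarks such as Landis and Koch's, the mean CST kappa of .42 would read as moderate agreement and the dangerousness kappa of .25 as only fair. A from-scratch sketch with hypothetical fitness opinions:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters judging the same cases.
    Assumes the raters are not both perfectly constant (p_e < 1)."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n                    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[l] / n) * (c2[l] / n) for l in set(r1) | set(r2))  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CST opinions from two examiners across ten cases.
ex1 = ["fit", "fit", "unfit", "fit", "unfit", "fit", "fit", "unfit", "fit", "fit"]
ex2 = ["fit", "unfit", "unfit", "fit", "unfit", "fit", "fit", "fit", "fit", "fit"]
print(round(cohens_kappa(ex1, ex2), 2))  # -> 0.47
```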

  33. Judge-Examiner Agreement • CST • Mean kappa .49 • Range .39 to .60 • Dangerousness • Cases where an examiner opinion on dangerousness was not ordered were excluded from the calculation. • Judicial determination in all cases was “No Determination” • Mean kappa .05 • Range .05 to .10

  34. Summary of Findings and Recommendations: Study 1

  35. Study 2: Three Panel Evaluations Prior to NGRI

  36. Number of Different Examiners • 18 Examiners

  37. Credential of Examiner (N = 416 cases; n = 30 reports)

  38. Classification of Criminal Offense (N = 416 cases; n = 30 reports)

  39. Examiner Opinion Concerning Criminal Responsibility (N = 416 cases; n = 30 reports)

  40. Examiner Rationale for Criminal Responsibility Opinion (N = 416 cases; n = 30 reports)

  41. Examiner Opinion Regarding Dangerousness (N = 416 cases; n = 30 reports)

  42. Examiner Rationale for Dangerousness Opinion (N = 416 cases; n = 30 reports)

  43. Examiner Suggestions for Managing Dangerousness/Risk Reduction (N = 416 cases; n = 30 reports)

  44. Evaluation Methods (N = 416 cases; n = 30 reports)

  45. Majority/Unanimity Agreement • CST • 70% = 100% agreement • 30% = At least 2 examiners and judge agree • Dangerousness • 20% = 100% agreement • 30% = At least 2 examiners and judge agree • 40% = At least 1 examiner and judge agree • 10% = Two examiners agree; judge and 3rd examiner differed completely

  46. Inter-examiner Agreement • CST • Mean kappa .61 • Range .46 to .76 • Dangerousness • Cases where an opinion on dangerousness was not ordered were excluded from the calculation. • Mean kappa .17 • Range -.11 to .40

  47. Judge-Examiner Agreement • CST • Mean kappa .79 • Range .60 to 1.0 • Dangerousness • Mean kappa .24 • Range .15 to .24

  48. Summary of Findings and Recommendations: Study 2

  49. Study 3: Three Panel Evaluations Prior to CR

  50. Number of Different Examiners • 16 Examiners
