
Special Topics in Single-Case Design Intervention Research Tom Kratochwill

This presentation by Tom Kratochwill covers general assessment issues, intervention integrity monitoring, cost-analysis, and social validity assessment in single-case design intervention research.



Presentation Transcript


1. Special Topics in Single-Case Design Intervention Research. Tom Kratochwill

  2. Goals of the Presentation • Review general assessment issues in single-case design research; • Feature the importance of intervention integrity and intensity monitoring and promotion; • Review some cost-analysis issues in single-case design research; • Discuss features and challenges of social validity assessment in intervention research.

  3. General Considerations in Assessment/Measurement

4. Choice of the Dependent Variable • A major consideration in single-case design intervention research is that dependent variable assessment must be repeated across time; • Traditional norm-referenced measurement does not easily lend itself to repeated assessment (sometimes called indirect assessment; e.g., self-report checklists and rating scales, informant-report checklists and rating scales); • Standardized instruments used for normative comparisons may not provide accurate data (e.g., checklists and rating scales; Reid & Maag, 1994); • Specification of the conditions of measurement should be noted (e.g., natural versus analogue settings, human observers versus automated recording). Reid, R., & Maag, J. W. (1994). How many fidgets in a pretty much: A critique of behavior rating scales for identifying students with ADHD. Journal of School Psychology, 32, 339-354. Shapiro, E. S., & Kratochwill, T. R. (Eds.) (2000). Conducting school-based assessments of child and adolescent behavior. New York: Guilford.

  5. Measurement Systems Commonly Used in Single-Case Design Research • Automated Recording (e.g., galvanic skin response, heart rate, blood pressure); • Permanent Products (e.g., academic responses in math, reading, spelling, number of clothes items on the floor); • Direct Observation (e.g., assessment of behavior as it occurs as noted by human observers, video recording, computer-based/phone-based apps or systems).

6. Quality of Assessment • Quality of assessment is typically determined by the reliability and validity of measurement (now typically referred to as “evidence-based assessment”). • Some clarification of terms for reliability: • Reliability of effect (usually established through replication of the intervention in the same or a repeated investigation); • Reliability of the procedures (sometimes referred to as intervention fidelity or integrity); • Reliability of measurement (usually determined by assessment agreement measures and required in the WWC Pilot Standards and other guidelines*).

  7. Remember WWC Standard 2: Inter-Assessor Agreement • Each Outcome Variable for Each Case Must be Measured Systematically by More than One Assessor. • Researcher Needs to Collect Inter-Assessor Agreement: • In each phase • On at least 20% of the data points in each condition (i.e., baseline, intervention) • Rate of Agreement Must Meet Minimum Thresholds: • (e.g., 80% agreement or Cohen’s kappa of 0.60) • If No Outcomes Meet These Criteria, Study Does Not Meet Design Standards.
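The agreement thresholds on this slide can be computed directly. Below is a minimal Python sketch, with illustrative interval-by-interval codes, of the two statistics the standard names: overall percent agreement and Cohen's kappa (chance-corrected agreement).

```python
# Minimal sketch of the WWC inter-assessor agreement check.
# The interval codes below are illustrative; the WWC thresholds cited
# on this slide are 80% agreement or Cohen's kappa of 0.60.

def percent_agreement(obs_a, obs_b):
    """Percentage of intervals on which two observers agree."""
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * matches / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    """Chance-corrected agreement for two observers coding the same categories."""
    n = len(obs_a)
    categories = set(obs_a) | set(obs_b)
    p_observed = sum(a == b for a, b in zip(obs_a, obs_b)) / n
    # Expected chance agreement from each observer's marginal proportions.
    p_chance = sum(
        (obs_a.count(c) / n) * (obs_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes (1 = behavior occurred in the interval, 0 = did not).
primary   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
secondary = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]

print(f"Agreement: {percent_agreement(primary, secondary):.0f}%")  # 90%
print(f"Kappa:     {cohens_kappa(primary, secondary):.2f}")        # 0.80
```

With these hypothetical data the case passes both thresholds; in practice the check would be repeated in each phase, on at least 20% of data points per condition.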

8. Observer Agreement in the WWC Pilot Standards • The WWC panel did not specify the method of observer agreement (e.g., using only measures that control for chance agreement). • Hartmann, Barrios, and Wood (2004) reported over 20 measures for assessing interobserver agreement. • A good practice is to report measures that control for chance agreement and to report agreement for occurrences and non-occurrences of the dependent variable. • It is also good to assess accuracy: the degree to which observer data reflect actual performance or some true measure of behavior (e.g., a video record of the behavior).
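Occurrence and non-occurrence agreement, the good practice noted above, restrict the comparison to intervals in which at least one observer scored (or did not score) the behavior, so that very high-rate or very low-rate behaviors do not inflate overall agreement. A minimal sketch with illustrative data:

```python
# Occurrence/non-occurrence agreement for interval recording.
# Data are illustrative (1 = occurrence scored, 0 = non-occurrence).

def occurrence_agreement(obs_a, obs_b):
    """Agreement over intervals where either observer scored an occurrence."""
    scored = [(a, b) for a, b in zip(obs_a, obs_b) if a == 1 or b == 1]
    return 100.0 * sum(a == b for a, b in scored) / len(scored)

def nonoccurrence_agreement(obs_a, obs_b):
    """Agreement over intervals where either observer scored a non-occurrence."""
    scored = [(a, b) for a, b in zip(obs_a, obs_b) if a == 0 or b == 0]
    return 100.0 * sum(a == b for a, b in scored) / len(scored)

primary   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
secondary = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]

print(f"Occurrence agreement:     {occurrence_agreement(primary, secondary):.0f}%")   # 83%
print(f"Non-occurrence agreement: {nonoccurrence_agreement(primary, secondary):.0f}%") # 80%
```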

  9. Artifact, Bias, and Complexity of Assessment • Researcher needs to consider factors that can obscure observer agreement: • Direct assessment by observers can be reactive; • Observers can drift in their assessment of the dependent variable; • Observers can have expectancies for improvement that are conveyed by the researcher; • Coding systems can be too complex and lead to error.

10. Clinical Trials and Tribulations: Measuring Outcomes in Intervention Research on Selective Mutism • Child outcomes in anxiety disorder research • The Three Response Systems: Motor/Cognitive/Psychophysiological (theory-based) • The gold standard in treatment research on children’s fears and phobias (Morris & Kratochwill, 1985). • The standard in treatment of selective mutism was child speech (Kratochwill, 1981). Morris, R. J., & Kratochwill, T. R. (1985). Treating children’s fears and phobias: A behavioral approach. New York: Pergamon Press. Kratochwill, T. R. (1981). Selective mutism: Implications for research and treatment. Hillsdale, NJ: Erlbaum.

  11. Selective Mutism • Motor • Speech (with select persons) • Speech prompted-analogue • Speech natural-initiated • Social Engagement • Parent Outcomes • Change in Parent Response to the Child • Teacher Outcomes • Change in Teacher Response to the Child

  12. Selective Mutism • Cognitive • Thoughts of avoidance • Thinking something bad is going to happen • Negative self-talk about school, home, friends, etc.

  13. Selective Mutism • Psychophysiological • Arousal as reflected in GSR • Heart Rate • Blood Pressure

14. Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report. Mark Appelbaum (University of California, San Diego), Harris Cooper (Duke University), Rex B. Kline (Concordia University, Montréal), Evan Mayo-Wilson (Johns Hopkins University), Arthur M. Nezu (Drexel University), Stephen M. Rao (Cleveland Clinic, Cleveland, Ohio)

15. Measures and covariates: • Define all primary and secondary measures and covariates, including measures collected but not included in this report. Data collection: • Describe methods used to collect data. Quality of measurements: • Describe methods used to enhance the quality of measurements, including: • Training and reliability of data collectors • Use of multiple observations

16. Instrumentation: • Provide information on validated or ad hoc instruments created for individual studies, for example, psychometric and biometric properties. Masking: • Report whether participants, those administering the experimental manipulations, and those assessing the outcomes were aware of condition assignments. • If masking took place, provide a statement regarding how it was accomplished and whether and how the success of masking was evaluated.

17. Psychometrics • Estimate and report values of reliability coefficients for the scores analyzed (i.e., the researcher’s sample), if possible. Provide estimates of convergent and discriminant validity where relevant. • Report estimates related to the reliability of measures, including: • Interrater reliability for subjectively scored measures and ratings • Test-retest coefficients in longitudinal studies in which the retest interval corresponds to the measurement schedule used in the study • Internal consistency coefficients for composite scales in which these indices are appropriate for understanding the nature of the instruments employed in the study • Report the basic demographic characteristics of other samples if reporting reliability or validity coefficients from those samples, such as those described in test manuals or in norming information about the instrument.
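Two of the coefficients listed above are straightforward to illustrate. The sketch below, with made-up scores, computes a test-retest coefficient (Pearson r across two measurement occasions) and an internal-consistency coefficient (Cronbach's alpha for a composite scale).

```python
# Illustrative reliability estimates; all scores below are invented.
import statistics

def pearson_r(x, y):
    """Test-retest reliability: correlation of scores across two occasions."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def cronbach_alpha(items):
    """Internal consistency for a composite scale.
    `items` is a list of per-item score lists (one score per respondent)."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

time1 = [12, 15, 9, 20, 14, 11]
time2 = [13, 14, 10, 19, 15, 10]
print(f"Test-retest r = {pearson_r(time1, time2):.2f}")

# Three scale items rated by six respondents.
scale_items = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 5, 3, 3],
    [3, 5, 1, 4, 4, 2],
]
print(f"Cronbach's alpha = {cronbach_alpha(scale_items):.2f}")
```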

18. Reference: Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3-25. https://doi.org/10.1037/amp0000191

  19. Follow-up Assessment • Some Considerations in Follow-up Assessment: • Importance of Follow-up Assessment; • Distinction Between Follow-up and Generalization; • Intervention “on or off” During Follow-up Interval; • Type of Follow-up Assessment Measures; • Length of Follow-up Assessment.

  20. Importance of Follow-up Assessment • Adds to Credibility of the Intervention Research; • Provides the Researcher with Information on the Durability of the Intervention; • May Provide Information to the Researcher on the Generalized Effects of the Intervention; • May be required in the RFP or journal.

21. Distinction Between Follow-up and Generalization Assessment • Follow-up is an Optional Assessment Process and can be used to Assess Generalization; • Generalization can be Measured in Stimulus or Response Modes; • A Useful Conceptual Tool for the Types of Generalization Assessment that can be Measured is the “Generalization Map.” [Allen, J. S., Tarnowski, K. J., Simonian, S., Elliott, D., & Drabman, R. S. (1991). The generalization map revisited: Assessment of generalized effects in child and adolescent behavior therapy. Behavior Therapy, 22(3), 393-405. https://doi.org/10.1016/S0005-7894(05)80373-9; Drabman, R. S., Hammer, D., & Rosenbaum, M. S. (1979). Assessing generalization in behavior modification with children: The generalization map. Behavioral Assessment, 1, 203-219.]

  22. Intervention “on or off” During Follow-up Interval • A Major Issue that the Researcher Must Consider in Follow-up Assessment is Whether the Intervention will be Continued or Discontinued During the Follow-up Interval. For example: • Intervention “on” in original or modified fashion; • Intervention “off” but re-established if needed; • Intervention “off” based on pre-established criteria.

23. Type of Follow-up Assessment Measures • Consider the following options: • The researcher may adopt the original assessment protocol used during the intervention trial; • The researcher may adopt an abbreviated form of the original assessment (e.g., direct observation but over a shorter time period); • The researcher may adopt an alternative form of the original assessment (e.g., use self-report or checklist and rating scales rather than direct observation).

  24. Length of Follow-up Assessment • Factors to consider include the following: • Nature of the problem/issue under consideration; • Importance of follow-up for the problem/issue under consideration; • Recommendations from prior research/researchers in the area of the intervention/problem under consideration; • Policy of the funding agency or journal for follow-up assessment; • Practical and logistical issues surrounding the investigation (e.g., availability of the participants, cost, research staff).

  25. Intervention Integrity

26. Intervention Integrity: A Developing Standard • Assessing (and especially promoting) intervention or treatment integrity is a more recent and developing consideration in intervention research (see Sanetti & Kratochwill, 2014). Sanetti, L. M., & Kratochwill, T. R. (Eds.) (2014). Treatment integrity: A foundation for evidence-based practice in applied psychology. Washington, DC: American Psychological Association.

  27. Representative Features of Intervention Integrity Data • Across Design Intervention Phases • Across Therapists/Intervention Agents • Across Situations • Across Sessions • Across Cases

28. Where does intervention integrity fit? 1. Screening/assessment data suggest prevention/intervention is warranted; 2. Evidence-based intervention selected and implemented; 3. Program outcomes (SO) and intervention integrity (II) assessed; 4. Data reviewed and data-based decisions made: • +SO +II → 5a. Continue intervention • +SO −II → 5b. Continue intervention and promote intervention integrity • −SO −II → 5c. Implement strategies to promote intervention integrity • −SO +II → 5d. Change intervention
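The data-based decision step reduces to a two-by-two rule on program outcomes (SO) and intervention integrity (II). A minimal sketch of that logic (the branch labels follow the flow above; the function name and boolean inputs are illustrative):

```python
# Data-based decision rule: which step follows from adequate/inadequate
# program outcomes (SO) and intervention integrity (II) data.

def next_step(outcomes_adequate: bool, integrity_adequate: bool) -> str:
    if outcomes_adequate and integrity_adequate:
        return "5a: Continue intervention"
    if outcomes_adequate and not integrity_adequate:
        return "5b: Continue intervention and promote intervention integrity"
    if not outcomes_adequate and not integrity_adequate:
        return "5c: Implement strategies to promote intervention integrity"
    # Outcomes poor despite adequate integrity: the intervention itself
    # is the likely problem, so change it.
    return "5d: Change intervention"

print(next_step(outcomes_adequate=False, integrity_adequate=True))  # -> 5d
```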

29. Treatment Integrity: Definitions • “Fidelity of implementation is traditionally defined as the determination of how well an intervention is implemented in comparison with the original program design during an efficacy and/or effectiveness study” (O’Donnell, 2008, p. 33). • Five distinct dimensions of intervention integrity have been proposed: adherence, exposure, quality of delivery, participant responsiveness, and program differentiation (Dane & Schneider, 1998). • Treatment integrity encompasses three different aspects: treatment adherence, therapist competence, and treatment differentiation (Waltz, Addis, Koerner, & Jacobson, 1993). • The degree to which treatment is delivered as intended (Yeaton & Sechrest, 1981). • Treatment integrity is the extent to which required intervention components are delivered as designed in a competent manner, while proscribed procedures are avoided, by an interventionist trained to deliver the intervention in a particular setting to a particular population (Perepletchikova, 2014). • “The degree to which an intervention program is implemented as planned” (Gresham et al., 1993, p. 254). • The “extent to which patients follow the instructions that are given to them for prescribed treatments” (Haynes et al., 2002, p. 2). • “Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions” (Fixsen et al., 2005, p. 5; see also Sanetti & Kratochwill, 2009).

30. Intervention Integrity in Research [Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school psychology literature from 1995 to 2008. School Psychology Review, 40, 72-84.] [Figure: proportion of reviewed studies reporting intervention integrity data and an operational definition of the intervention.]

  31. Intervention Integrity Assessment in School Psychology Practice [Cochran, W. S., & Laux, J. M. (2008). A survey investigating school psychologists' measurement of treatment integrity in school‐based interventions and their beliefs about its importance. Psychology in the Schools, 45, 499-507. https://doi.org/10.1002/pits.20319 ]

32. Implementation Supports (Sanetti & Collier-Meek, 2014) • Negative reinforcement • Intervention manual • Test driving interventions • Expert consultation • Intervention scripts • Video support • Intervention choice • Performance feedback • Implementation planning • Classroom Check-Up • Motivational interviewing • Participant modeling • Instructional coaching • Treatment planning protocol • Role play • Prompts • Collaborative consultation • Self-monitoring • Direct training

33. Perspectives on Intervention Integrity • Intervention integrity framed as a behavior performance deficit, addressed through: • Performance feedback (Noell et al., 1997, 2002, 2005) • Negative reinforcement (DiGennaro et al., 2005, 2007) • Email/text/Skype cues

34. Contributions from Related Fields • Work in psychology and education suggests that intervention integrity may be more complicated than a behavior performance deficit.

35. Shifting Conceptualization of Treatment Integrity • Prevention/intervention programs require implementers to change their behavior. • Implementing interventions with a high level of integrity can therefore be considered an adult behavior change process.

36. Focus on the HAPA (Health Action Process Approach) • We looked for a theory that: • Is conceptually clear and consistent; • Is parsimonious but enables us to describe, explain, and predict behavior change; • Explicates behavior performance, not just the development of behavioral intention; • Enables us to design effective interventions that produce change in the predicted behaviors; • Has empirical support and practical utility.

37. Empirical Support • Healthy eating • Exercise behaviors • Breast self-examination • Seat belt use • Dental flossing. See Schwarzer, R. (2008). Modeling health behavior change: How to predict and modify the adoption and maintenance of health behaviors. Applied Psychology, 57, 1-29. https://doi.org/10.1111/j.1464-0597.2007.00325.x

38. Project PRIME: Planning Realistic Intervention Implementation and Maintenance by Educators • www.implementationscience.uconn.edu • Funded by the Institute of Education Sciences, U.S. Department of Education (R324A100051)

  39. Project PRIME • PRIME was designed originally to prevent teachers’ level of intervention implementation from declining. • Developed a system of supports to facilitate teachers’ intervention implementation. • Delivered through a problem-solving consultation model (Bergan & Kratochwill, 1990; Kratochwill & Bergan, 1990).

40. Three Components of PRIME: 1. Implementation Planning: A detailed guide walks consultants and consultees through the process of • Action planning • Coping planning

41. Action Planning • Specifies how and under what circumstances an intended intervention action is to be completed • Bridges the gap between intention to implement and initiation of implementation

42. Action Planning Continued… • Completion of the Action Plan helps to: • Define the intervention components and steps • All aspects of the intervention are accounted for by a component (e.g., a social skills lesson) • Each step corresponds to behavior(s) performed to implement an intervention component (e.g., introducing the topic of the social skills lesson) • Detail logistical planning for implementation • Identify potential resource barriers

43. Action Planning Continued… • Logistical planning questions answered for each implementation step include: • Where will the step be implemented? • How often will the step be implemented? • For how long will the step be implemented? • What resources (materials, time, space, personnel, etc.) are needed to complete this step? • Throughout implementation, the Action Plan can be updated to reflect changes in implementation.

44. Coping Planning • Completion of the Coping Plan helps to: • Identify up to 4 of the most significant barriers to intervention implementation • Develop “coping” strategies to address identified barriers • Barriers to implementation are listed (e.g., major changes in class schedule due to statewide testing) • Throughout implementation, the Coping Plan is updated to reflect changes in implementation.
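As a rough illustration of the kinds of records the Action Plan and Coping Plan capture, here is a Python sketch; all field names and example entries are assumptions for illustration, not PRIME's actual materials.

```python
# Hypothetical data structures mirroring the Action Plan logistics
# questions and the Coping Plan's barrier/strategy pairs.
from dataclasses import dataclass, field

@dataclass
class ActionPlanStep:
    component: str                 # e.g., "social skills lesson"
    behavior: str                  # e.g., "introduce the topic of the lesson"
    where: str                     # Where will the step be implemented?
    how_often: str                 # How often?
    duration: str                  # For how long?
    resources: list[str] = field(default_factory=list)  # materials, time, etc.

@dataclass
class CopingPlan:
    barriers: list[str]            # up to 4 most significant barriers
    strategies: dict[str, str]     # barrier -> coping strategy

    def __post_init__(self):
        assert len(self.barriers) <= 4, "Identify at most four barriers"

step = ActionPlanStep(
    component="social skills lesson",
    behavior="introduce the topic of the lesson",
    where="homeroom classroom",
    how_often="twice weekly",
    duration="15 minutes",
    resources=["lesson script", "timer"],
)

plan = CopingPlan(
    barriers=["statewide testing disrupts class schedule"],
    strategies={"statewide testing disrupts class schedule":
                "shift lessons to homeroom during testing weeks"},
)
```

Both records are meant to be updated throughout implementation, which is why they are modeled as mutable dataclasses rather than fixed configuration.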

45. Three Components of PRIME: 2. Implementation Beliefs Assessment (IBA): Assesses the implementer’s • Outcome expectancies • Self-efficacy (for implementation, maintenance, and recovery)

  46. Three Components of PRIME: 3. Strategies to Increase Implementation Intention and Self-efficacy Eight strategies have been identified. Detailed guides walk consultants through strategy implementation. Strategies include: • Participant modeling • Role play

47. Multi-Tiered Implementation Supports (MTIS) • Tier 3: Intensive strategies that typically require ongoing support • Tier 2: Selected strategies based on the implementer’s intervention integrity and stage of learning • Tier 1: Feasible and widely relevant implementation support strategies that can be easily embedded into typical programs

  48. MTIS: Tier 1 Strategies • Intervention manual • Intervention scripts • Collaborative/Expert consultation • Instructional coaching • Intervention choice • Test driving interventions • Direct training
