SOCW 671 # 8


Presentation Transcript


  1. SOCW 671 # 8 Single Subject/System Designs Intro to Sampling

  2. Single-Subject Designs • Evaluation designs that involve arrangements in which repeated observations are taken before, during, and/or after an intervention. • These observations are compared to monitor the progress and assess the outcome of that service.

  3. Logic of Single Subject/System Designs • Unlike experimental designs that involve experimental and control groups, single-system designs have one identified client/system • This identified client/system may be an individual or a group • These designs are based on a time-series logic, with repeated measurements over time serving as the point of comparison

  4. Use on the Micro Level of Social Work Practice • If you are practicing at the micro level, this likely will be the most common method to use. • Directly related to client progress

  5. Measurement Issues • Need to specify targets of intervention by having an operational definition of the target behavior • Triangulation - the use of two or more indicators or measurement strategies when confronted with a multiplicity of measurement options • Self-report scales are often used; these have pluses and minuses
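
To make triangulation concrete, here is a minimal sketch (in Python, with hypothetical weekly scores rather than real client data) that puts two indicators of the same target behavior on a common z-score metric so their patterns can be compared side by side:

```python
# Sketch: comparing two hypothetical indicators of the same target
# (a self-report scale and an observer's behavior count) by putting
# both on a common z-score metric. Data are illustrative only.
import statistics

self_report = [22, 20, 24, 19, 23, 21]  # weekly self-report scale scores
observed = [7, 8, 6, 7, 9, 7]           # weekly observed behavior counts

def zscores(xs):
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

# If the two standardized series move together, the indicators corroborate
# each other; large disagreements flag a possible measurement problem.
for week, (a, b) in enumerate(zip(zscores(self_report), zscores(observed)), 1):
    print(f"week {week}: self-report z={a:+.2f}, observed z={b:+.2f}")
```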

  6. Unobtrusive Measurement Preferred • Will want to reduce bias and reactivity through the use of unobtrusive measurement (observing and recording behavioral data in ways that, by and large, are not noticeable to the person being observed).

  7. First Need Baseline (control phase) Measures • Pattern should not reflect a trend of dramatic improvement to the degree that it suggests the problem is nearing resolution • Should have many measurement points • Chronologically graphed data should be stable
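
Before intervening, the baseline can be screened numerically as well as visually. A minimal sketch with hypothetical scores: the standard deviation speaks to stability, and an early-to-late comparison gives a crude check for a trend of dramatic improvement:

```python
# Sketch: screening a hypothetical baseline for stability and trend
# before starting the intervention phase.
import statistics

baseline = [7, 8, 6, 7, 9, 7, 8]  # e.g., weekly counts of a target behavior

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)   # small sd relative to the mean -> stable

# Crude trend check: compare the first and last thirds of the series.
third = len(baseline) // 3
change = statistics.mean(baseline[-third:]) - statistics.mean(baseline[:third])

print(f"mean={mean:.2f}, sd={sd:.2f}, early-to-late change={change:+.2f}")
# A large improvement here suggests the problem may be nearing resolution
# on its own, which weakens the baseline as a control phase.
```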

  8. Alternative Designs • AB • ABAB • Multiple Baseline & Successive Interventions • Multiple Component

  9. AB: Basic Single-Subject Design • Collect data during the baseline period • Collect data during the intervention • A problem is that it does not control well for history
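
Since single-subject conclusions rest on the chronologically graphed data, a simple sketch of an AB graph may help (hypothetical data; matplotlib assumed to be installed):

```python
# Sketch: graphing a hypothetical AB design, with a vertical line
# marking where the baseline (A) ends and the intervention (B) begins.
import matplotlib.pyplot as plt

baseline = [7, 8, 6, 7, 9, 7, 8]      # A phase observations
intervention = [6, 5, 5, 4, 3, 3, 2]  # B phase observations

weeks = range(1, len(baseline) + len(intervention) + 1)
plt.plot(weeks, baseline + intervention, marker="o")
plt.axvline(len(baseline) + 0.5, linestyle="--", label="intervention begins")
plt.xlabel("Week")
plt.ylabel("Target behavior (count)")
plt.title("AB single-subject design (hypothetical data)")
plt.legend()
plt.show()
```

Because nothing rules out a coinciding historical event, even a clear shift in this graph supports only a tentative causal inference.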

  10. ABAB: Withdrawal/Reversal Design • Two problems • Improvement in target behavior may not be reversible even when intervention is withdrawn • Practitioner may be unwilling to withdraw something that appears to be working

  11. Multiple-Baseline Design (Successive Interventions) • Consists of several different interventions • The interventions are staggered • Each intervention is applied one after another in separate phases • The application of the intervention is provided to different target problems, settings, or individuals

  12. Multiple-Component Design • Combines elements of the experimental replication and successive intervention designs • Can be used with or without baselines • Purpose is to compare the relative effectiveness of two different interventions • Problem with being able to infer that any one component alone produced the change in the target behavior

  13. Data Analysis • Two-standard-deviation band approach (Shewhart chart) • Chi-square • t-test & ANOVA
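
For the t-test option, a minimal sketch with hypothetical phase data (scipy assumed available). One caveat worth noting: time-series observations are often autocorrelated, which violates the independence assumption of these tests, so results should be read cautiously:

```python
# Sketch: independent-samples t-test comparing hypothetical baseline and
# intervention observations. Autocorrelated data violate the test's
# independence assumption, so treat this as illustrative only.
from scipy import stats

baseline = [7, 8, 6, 7, 9, 7, 8]
intervention = [6, 5, 5, 4, 3, 3, 2]

t, p = stats.ttest_ind(baseline, intervention)
print(f"t = {t:.2f}, p = {p:.4f}")
```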

  14. Shewhart Chart • Mean level of baseline data is identified • Two standard-deviation levels (bands) are constructed above and below the mean line • These bands are extended into the intervention phase • If two successive observations during the intervention phase fall outside the bands, there is a significant change
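
A minimal sketch of that decision rule with hypothetical data: the bands are built from the baseline phase, and change is flagged when two successive intervention observations fall outside them:

```python
# Sketch: two-standard-deviation band (Shewhart chart) rule on
# hypothetical data. Significant change = two successive intervention
# observations outside the bands built from the baseline.
import statistics

baseline = [7, 8, 6, 7, 9, 7, 8]
intervention = [6, 5, 5, 4, 3, 3, 2]

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
lower, upper = mean - 2 * sd, mean + 2 * sd

outside = [obs < lower or obs > upper for obs in intervention]
significant = any(a and b for a, b in zip(outside, outside[1:]))

print(f"bands: [{lower:.2f}, {upper:.2f}]  significant change: {significant}")
```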

  15. Complicating Factors • Carryover – occurs when the effects obtained in one phase appear to carry over into the next phase • Contrast – when the subject reacts to the difference in the two interventions or phases • Order of presentation – when the order of the phases may itself be part of the causal impact • Incomplete data – when a subject or client does not “fit” nicely into the phase time frame • Training Phase – client may not have the prerequisite skills for full participation in the intervention when it begins

  16. Causality Criteria in Single Subject (System) Designs • temporal arrangement • co-presence of the intervention & desired change in target behavior • repeated co-presence of the intervention and the manifestations of the desired change • consistency over time • conceptually and practically grounded in scientific/professional knowledge.

  17. Design Validity & Reliability • Replication is very useful • Statistical Conclusion Validity: Did change occur? • Internal Validity: Was change caused by the intervention? • Construct Validity: Were the intervention and the measurement of outcomes accurately conducted?

  18. Intro to Sampling • Non-probability • Probability

  19. Non-probability • Reliance on available subjects • Quota sampling • Snowball sampling • Selecting informants

  20. Probability • Simple random • Systematic • Stratified • Cluster
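
As a brief sketch of the first two options, here is how a simple random sample and a systematic sample might be drawn in Python from a hypothetical frame of 100 client IDs:

```python
# Sketch: simple random vs. systematic sampling from a hypothetical
# sampling frame of 100 client IDs.
import random

frame = list(range(1, 101))  # hypothetical sampling frame
n = 10                       # desired sample size

# Simple random sample: every element has an equal chance of selection.
srs = random.sample(frame, n)

# Systematic sample: pick a random start, then take every k-th element.
k = len(frame) // n
start = random.randrange(k)
systematic = frame[start::k]

print("simple random:", sorted(srs))
print("systematic:  ", systematic)
```

Stratified and cluster sampling follow the same logic but first divide the frame into homogeneous strata (sampling within each) or naturally occurring clusters (sampling whole clusters).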

  21. Issues in Program Evaluation • Evaluation as Representation • A program evaluation is not the program, only a snapshot of it • Organizations are complex; therefore evaluations often focus on select services • Evaluations can go beyond a consumer focus and may review staff, community relations, continuing education, etc.

  22. Common Characteristics • Program models • Resource constraints • Evaluation tools • Politics and ethics • Cultural considerations • Presentation of evaluation findings

  23. Common Characteristics (continued) • Program models • Need a blueprint as expressed by a logic model • Program survival requires that evaluation be performed to maintain contracts • Outputs and outcomes are monitored • Outputs are non-client-related objectives • Outcomes are client-related objectives • Infrastructure-related objectives serve a program-maintenance function

  24. Common Characteristics (continued) • Resource constraints • Insufficient time, staff, money, or evaluation know-how • Typical implementation times • Needs assessment: 3 to 6 months • Evaluability assessment: 3 to 6 months • Process evaluation: 12 to 18 months • Outcome evaluation: 6 to 12 months • Cost-benefit analysis: 1 to 2 months
