Project VIABLE: Impact of Observation Period Duration on Direct Behavior Rating (DBR)

Rose Jaffery¹, Christina Boice², Amy L. Ivey³, Selena Waite³, T. Chris Riley-Tillman³, Theodore J. Christ², & Sandra M. Chafouleas¹
¹University of Connecticut, ²University of Minnesota, ³East Carolina University


Introduction

The overall purpose of this study was to add to the growing base of literature regarding the technical adequacy of Direct Behavior Rating (DBR). DBR is a method of social behavior assessment that integrates components of systematic direct observation and behavior rating scales (Chafouleas, Riley-Tillman, & Sugai, 2007), allowing educators to efficiently and flexibly gather formative data on student behavior across a variety of contexts.

Specific research questions focused on three areas of evaluation important to understanding the technical adequacy of DBR: accuracy of ratings under varied instrumentation (anchoring), accuracy under varied procedures (observation length), and test-retest consistency. First, when using DBR to rate pre-specified behaviors (academic engagement, disruptive behavior), (a) is the accuracy of obtained data impacted by the duration of the observation period (i.e., 5 minutes vs. 20 minutes), and (b) how does an average rating across shorter observation periods (e.g., 5 minutes) compare to a single rating across a longer observation period (e.g., 20 minutes)? Second, when using DBR to rate those same behaviors, are obtained data impacted by the anchoring of the DBR scale (i.e., proportion/percentage of time vs. absolute magnitude/actual number of minutes)? Third, what is the one-week test-retest consistency of DBR outcome data when rating the same behavior using a similar DBR form? (This question was conditional on the second: it was examined only if results indicated that DBR data are not impacted by the anchoring of the scale.)
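The second research question hinges on how the rating scale is anchored. As a point of reference, the two schemes can be expressed computationally; the sketch below is purely illustrative, with an assumed 0-10 gradient for the proportional form and hypothetical function names, not a reproduction of the study's actual DBR forms:

```python
# Hypothetical sketch of the two anchoring schemes compared in the study.
# The 0-10 gradient and function names are assumptions for illustration only.

def dbr_proportional(minutes_observed: float, minutes_with_behavior: float,
                     scale_max: int = 10) -> int:
    """Proportional anchor: rate the percentage of time the behavior occurred,
    mapped onto a 0..scale_max gradient."""
    return round(scale_max * minutes_with_behavior / minutes_observed)

def dbr_absolute(minutes_with_behavior: float) -> int:
    """Absolute anchor: rate the actual number of minutes the behavior occurred."""
    return round(minutes_with_behavior)

# A student who is academically engaged for 12 of 20 observed minutes:
print(dbr_proportional(20, 12))  # -> 6, i.e., engaged ~60% of the time
print(dbr_absolute(12))          # -> 12 minutes of engagement
```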
Method

Participants were 81 undergraduate students enrolled in an introductory psychology course at a large university located in the Southeast; 21% reported current enrollment in a teacher education program. Participants were randomly assigned to one of two DBR instrumentation conditions (proportional or absolute DBR scales) and then viewed eight 5-minute video clips of a child in a typical 3rd-grade classroom. After viewing each clip, participants rated the target student's behavior using the assigned DBR scale. Directions also required participants to make a general rating over the past 10 minutes or over the past 20 minutes in addition to rating only the past 5 minutes (Table 1).

Table 1. Instructions given to participants after viewing each 5-minute clip.

Participants returned one week later to rate the same eight clips using the alternate type of DBR scale; for example, a participant given a proportional scale at session 1 used an absolute scale at session 2. Every participant therefore rated each of the eight 5-minute clips with both types of scales, yielding a fully crossed design. The outcome variable of interest was the rating assigned by the participant to the target student's behavior. To assess accuracy, participant ratings were compared to researcher ratings of the same clips obtained through systematic direct observation and converted into DBR scores (SDO DBR).

Results

Repeated measures multivariate analyses of variance (MANOVA) examined the multivariate combination of ratings of academic engagement and disruptive behavior within each of two studies (10-minute vs. 20-minute observations). Each study tested random effects for within-subject factors across duration, anchor, and observation period. Corrected omnibus MANOVA values were used to counter potential violations of the homogeneity of variance assumption, and follow-up univariate analyses were used to examine and interpret all levels of statistically significant multivariate effects. This process supported interpretation of main effects despite statistically significant interactions.

Table 2. Descriptive statistics.

Figure 1. Estimated marginal means across durations and observations for 20-minute observations: Duration 1 = average of four 5-minute observations; Duration 2 = average of two 10-minute observations; Duration 3 = one 20-minute observation. Anchor 1 = percent of time; Anchor 2 = absolute number of minutes.

Ratings of academic engagement were not impacted by duration: they were fairly consistent across the three durations and the two 20-minute observations, as expected given that the target student's academic engagement did not vary greatly across video clips. Ratings of disruptive behavior, however, varied significantly across the three durations and the two 20-minute observations, with statistically significant differences between durations for both 10-minute observations (univariate partial η² = .48) and 20-minute observations (univariate partial η² = .36; see Figure 1). The longer the observation period, the more ratings of disruptive behavior were overestimated (e.g., ratings of 20-minute observations were overestimated more than ratings of 10-minute observations). Because the amount of disruptive behavior did vary across clips, it was also apparent that the longer the student was disruptive during a clip, the more pronounced the overestimation effect became.

Anchoring the DBR scale as proportional (percentage of time) vs. absolute (actual number of minutes) had no significant effect on the data; neither scale was more or less accurate. Because anchoring had no effect and the two administrations were one week apart, the ratings could be analyzed for test-retest reliability. These analyses revealed low to moderate consistency across time points for both 10-minute and 20-minute observations. However, the means and standard deviations for the groups were quite consistent (see Table 3), and as the number of raters or the number of ratings increased (e.g., an average of four 5-minute ratings vs. one 20-minute rating), higher levels of consistency were achieved.

Table 3. Bivariate correlations of engaged behavior during 20-minute conditions.

Overall, participants' ratings tended to overestimate the occurrence of both academic engagement and disruptive behavior: a significant overestimation effect was observed in 100% of observation periods (30 of 30) for academic engagement and in 93% (28 of 30) for disruptive behavior, with mean DBR scores 1-2 points above the actual SDO DBR score.
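The role of aggregation in the test-retest findings can be illustrated with a short simulation. The sketch below is hypothetical (81 simulated raters, four 5-minute ratings per session, an arbitrary noise level), not the study's data or analysis code; it simply shows why an average of several short ratings is more consistent across weeks than any single rating:

```python
# Illustrative simulation (not the study's data or analysis code): why averaging
# several short DBRs yields better week-to-week consistency than a single rating.
# All numbers here are assumptions chosen for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_ratings = 81, 4                    # 81 participants, four 5-min ratings

true_level = rng.uniform(3, 8, size=n_raters)  # each rater's stable rating tendency
session1 = true_level[:, None] + rng.normal(0, 1.5, size=(n_raters, n_ratings))
session2 = true_level[:, None] + rng.normal(0, 1.5, size=(n_raters, n_ratings))

def retest_r(a, b):
    """Pearson correlation between two score vectors (one value per rater)."""
    return float(np.corrcoef(a, b)[0, 1])

# A single 5-minute rating correlates only modestly across the two weeks...
print(f"single rating: r = {retest_r(session1[:, 0], session2[:, 0]):.2f}")
# ...while the mean of four ratings is noticeably more consistent, mirroring
# the finding that aggregated DBR data reach higher levels of consistency.
print(f"mean of four:  r = {retest_r(session1.mean(axis=1), session2.mean(axis=1)):.2f}")
```

Averaging shrinks rating-to-rating error variance roughly in proportion to the number of ratings combined, which is the same mechanism behind the observation that more raters or more ratings produce higher consistency.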
Summary and Conclusions

Overall, results suggest that individual DBRs of academic engagement and disruptive behavior, which showed an overestimation effect, are not highly reliable in isolation. However, as the test-retest findings suggest, DBR can produce reliable results given a sufficient number of ratings. This is encouraging because, in practice, many DBR data points are collected given the method's intended use in formative assessment. In addition, because anchoring of the DBR scale had no effect on accuracy, the flexibility of DBR is supported: instrumentation can be created in alternate formats without impacting the reliability of the resulting data.

In conclusion, findings from this study provide preliminary information regarding the influence of observation period duration on DBR and contribute to the general psychometric evaluation of DBR. Although additional research certainly is needed to fully evaluate DBR as an assessment method, the current study adds significantly toward developing guidelines for recommended instrumentation and procedures.

Preparation of this poster was supported by a grant from the Institute of Education Sciences (IES), U.S. Department of Education (R324B060014). For additional information, please direct all correspondence to Sandra Chafouleas at sandra.chafouleas@uconn.edu.
