
Recoding behavioral self-reg data into item-level binary variables that can be scored

Learn how to recode and score behavioral data into binary items for more meaningful analysis. This workshop will cover techniques for converting behavioral measures into "total correct" or "above threshold" scores, with a focus on latency and duration variables.





Presentation Transcript


  1. Recoding behavioral self-reg data into item-level binary variables that can be scored. Spencer meeting, IHDSC, New York University, Mar 24th. Part 2.

  2. The challenge: • When we standardize behavioral data (e.g., latency to touch a prohibited toy as a Z-score), the metric becomes less meaningful. • Alternate solution: • Rescale the observed indicators into binary items so that we can create a “total correct” or “above threshold” score on a set of behavioral measures (see the sketch below). • Model children’s scores on aggregates of “total correct” on behavioral measures, focusing on latent θ. • Additional twist: to a lesser or greater extent, the binary items are conditionally related (Reardon & Raudenbush, 2006).
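A minimal sketch of what this recoding could look like in Python. The column names, cut points, and example latencies below are hypothetical, not taken from any dataset mentioned in the talk:

```python
# Hypothetical example: recode continuous behavioral indicators into binary
# "above threshold" items, then sum them into a "total correct" score.
import pandas as pd

df = pd.DataFrame({
    "latency_touch": [60, 12, 45, 3],   # seconds before touching prohibited toy
    "latency_peek":  [60, 60, 20, 5],   # seconds before peeking
})

# 1 = child waited at least the chosen cut point, 0 = otherwise
df["touch_item"] = (df["latency_touch"] >= 60).astype(int)
df["peek_item"] = (df["latency_peek"] >= 60).astype(int)

# Aggregate the binary items into a "total correct" score per child
df["total_correct"] = df[["touch_item", "peek_item"]].sum(axis=1)
print(df)
```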

  3. Not new…. • Delay tasks often rely on measurement of latency and duration– waiting longer before touching/receiving prohibited object • Mischel, Rodriguez, Aber, Denton • Age-related change in: • length of time able to wait • “strategy” of self-regulation, e.g. cold vs. hot cognitions • In contrast, measures of attention and executive function (CPT, Peg-Tap) routinely scored as 0/1 across a large number of trials. • Blair, NICHD team • Age-related change in: • Number of trials administered • Number of trials correct • EF work by Carlson (2005) and others recommends scoring P/F for ease of scalability • Carlson, S. M. (2005). Developmentally sensitive measures of executive function in preschool children. Developmental Neuropsychology, 28, 595–616.

  4. Example already completed…no recoding necessary • The Continuous Performance Task (CPT) is a measure of sustained attention. A CPT modeled on the young children’s version described by Mirsky and his colleagues was used (e.g., Mirsky, Anthony, Duncan, Ahearn, & Kellam, 1991; Rosvold, Mirsky, Sarason, Bransome, & Beck, 1956). • In this computer-generated task, dot-matrix pictures of familiar objects (e.g., butterfly, fish, flower) were presented on a 2-inch square screen in front of the child. The child was asked to press a button each time a target stimulus appeared. • At 54 months, the stimuli were presented in 22 blocks of 10 stimuli each; stimulus duration was 500 msec. • At 1st grade, the stimuli were presented in 30 blocks of 10 stimuli each; duration was 200 msec. • At 4th grade, 45 blocks of 12 stimuli each; duration was 200 msec. • Scoring (see the sketch below): • 1 Response Correct (RC): child presses the button when the target stimulus is present. • 0 Omission (O): child does not press the button when the target stimulus is present. • 0 Incorrect (I): child presses the button when a stimulus other than the target is present. • Additional data that might be capitalized on: • Child needed 1 or more re-directs (approx. 5% of sample) • Child responded late or very late
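As a hedged illustration of the 0/1 scheme above, the following sketch scores hypothetical trial records (column names and values are invented). Correct rejections on non-target trials are not part of the scheme listed on the slide, so they are left unscored here:

```python
# Hypothetical trial-level CPT scoring:
#   1 = Response Correct (press when the target is present)
#   0 = Omission         (no press when the target is present)
#   0 = Incorrect        (press when a non-target is present)
import pandas as pd

trials = pd.DataFrame({
    "is_target": [1, 1, 0, 0, 1],
    "pressed":   [1, 0, 1, 0, 1],
})

# Keep only trials covered by the scheme: target trials and commission errors
scored = trials[(trials["is_target"] == 1) | (trials["pressed"] == 1)].copy()
scored["item"] = ((scored["is_target"] == 1) & (scored["pressed"] == 1)).astype(int)

total_correct = scored["item"].sum()
print(scored, total_correct)
```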

  5. Options • Simple: correct / trials completed = proportion correct • Complicated: alternatively, might want to construct a matrix of the child’s performance based on “gated” aspects of the assessment (see the sketch below). • [Slide diagram: treating performance as items, with Pass/Fail and Attentive/Inattentive codes across trials 5 and 6. If trial 5 fails, is the kid tuning out (inattentive)? Can the experimenter administer trial 6?]
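One possible reading of the two options in Python, with a hypothetical item vector; NaN marks a trial that was never administered, which is where the “gated” structure matters:

```python
# The "simple" option is just correct / trials completed. The "gated" option
# needs to track which trials were actually administered, since a fail (or
# inattention) on trial 5 may mean trial 6 was never given.
import numpy as np

items = np.array([1, 1, 0, 1, 0, np.nan])   # np.nan = trial not administered

completed = np.sum(~np.isnan(items))         # trials the child actually received
proportion_correct = np.nansum(items) / completed
print(completed, proportion_correct)
```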

  6. Delay task data – not quite as simple, but not excruciating • Can’t figure out scoring for the 54-month delay task – Monica? • Using CSRP data • Giftwrap task: • Experimenter turns the child’s seat around and tells the child not to peek while the experimenter noisily wraps a gift. Then turns the child back around and tells the child not to touch. • Latency to peek • Latency to touch

  7. What would latencies on the Giftwrap Task look like? • Could simply score the number of “items correct”, e.g., total correct score = 5. • OR, could consider the data in a conditional framework (see the sketch below). • [Slide diagram: latencies on the toy-wrap peek and toy-wrap wait phases recoded as Pass/Fail items (1a, 1b) at 60-sec and 10-sec cut points, with branches for whether the child peeks, touches, or unwraps the toy.]
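A rough sketch of the conditional recoding suggested by the diagram, assuming 10-second and 60-second cut points and invented function and variable names. Passing the stricter cut point implies passing the easier one, which is exactly the conditional dependence among items noted on slide 2:

```python
# Hypothetical recoding: one latency becomes an ordered set of binary items,
# one per cut point. Items from the same latency are conditionally related,
# since waiting 60 sec entails having waited 10 sec.
def latency_to_items(latency_sec, cut_points=(10, 60)):
    """Return one 0/1 item per cut point: 1 if the child waited at least that long."""
    return {f"waited_{c}s": int(latency_sec >= c) for c in cut_points}

peek_items = latency_to_items(25)    # e.g., {'waited_10s': 1, 'waited_60s': 0}
touch_items = latency_to_items(60)   # e.g., {'waited_10s': 1, 'waited_60s': 1}

total_correct = sum(peek_items.values()) + sum(touch_items.values())
print(peek_items, touch_items, total_correct)
```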
