
Experimental Design & Analysis



Presentation Transcript


  1. Experimental Design & Analysis: Further Within Designs; Mixed Designs; Response Latencies. April 3, 2007. Doctoral Seminar, Spring Semester 2007

  2. Outline • Mixed, or split-plot, designs • Response latency designs, a special case for split plots

  3. Mixed Designs • The mixed factorial design combines a within-subjects factor with a between-subjects factor in a fully crossed factorial arrangement • A particularly common type of mixed design is the pre-post-control design • All subjects are given a pre-test and a post-test, and these two together serve as the within-subjects factor • Participants are also divided into two groups • One group is the focus of the experiment (i.e., the experimental group) • The other group serves as the baseline (i.e., control) group
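To make this layout concrete, here is a minimal sketch (in Python, with hypothetical column names and simulated numbers) of a pre-post-control dataset, first in wide form and then reshaped to the long form that most mixed-design routines expect.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_per_group = 10  # hypothetical sample size per group

# Wide layout: one row per subject, pre/post scores as separate columns
wide = pd.DataFrame({
    "subject": np.arange(2 * n_per_group),
    "group": ["experimental"] * n_per_group + ["control"] * n_per_group,
    "pre": rng.normal(50, 10, 2 * n_per_group),
    "post": rng.normal(45, 10, 2 * n_per_group),
})

# Long layout: one row per subject x test occasion (the within-subjects factor)
long = wide.melt(id_vars=["subject", "group"], value_vars=["pre", "post"],
                 var_name="test", value_name="score")
print(long.head())
```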

  4. Mixed Designs • Also known as split-plot designs, mixed designs originated in agricultural research • Seeds were assigned to different plots of land, each plot receiving a different treatment • Subjects (e.g., seeds) are randomly assigned to each level (e.g., plots) of the between-groups factor (soil types) before receiving the within-subjects, repeated-factor treatments (e.g., applications of different types of fertilizer)

  5. Mixed Designs • Partitioning the variance • Estimate the between-groups effect • Estimate the within-groups effect • Estimate the interaction • Using error terms to estimate effects and their significance • The between-subjects error term is used for the between-groups effect • The within-subjects error term is used for the within-groups effect • The within-subjects error term is also used for the interaction, since the interaction includes a within-subjects effect (a worked sketch of this partition follows)
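A worked sketch of this partition, assuming a balanced design held in a long-format pandas DataFrame with hypothetical columns group (the between factor A), cond (the within factor B), subject, and y; it follows the textbook sums-of-squares formulas rather than any packaged routine.

```python
import numpy as np
import pandas as pd

def partition_ss(df, a="group", b="cond", s="subject", y="y"):
    """Sums of squares for a balanced mixed design (A between, B within)."""
    gm = df[y].mean()
    # Marginal and cell means, broadcast back onto every observation
    m_a  = df.groupby(a)[y].transform("mean")
    m_b  = df.groupby(b)[y].transform("mean")
    m_ab = df.groupby([a, b])[y].transform("mean")
    m_s  = df.groupby(s)[y].transform("mean")   # subject means (subjects nested in A)

    ss = {"A": ((m_a - gm) ** 2).sum(),
          "B": ((m_b - gm) ** 2).sum(),
          "AxB": ((m_ab - m_a - m_b + gm) ** 2).sum(),
          "S(A)": ((m_s - m_a) ** 2).sum(),
          "total": ((df[y] - gm) ** 2).sum()}
    ss["BxS(A)"] = ss["total"] - ss["A"] - ss["B"] - ss["AxB"] - ss["S(A)"]
    return ss

# Hypothetical data: 2 groups x 4 subjects per group x 3 within-subject conditions
rng = np.random.default_rng(1)
rows = [{"group": g, "subject": f"{g}{i}", "cond": c, "y": rng.normal()}
        for g in "AB" for i in range(4) for c in ("c1", "c2", "c3")]
print(partition_ss(pd.DataFrame(rows)))
```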

  6. Mixed Designs • Consider a mixed design in which factor A is the between-subjects factor and B is the within-subjects factor: Ax(BxS) • The subjects factor is crossed with B but nested in A • Half of the subjects see conditions a1b1 and a1b2; the other half see a2b1 and a2b2 • Sources of variability: A, B, AxB, S(A), BxS(A) • To test the effects, compare the mean square of each effect (A, B, AxB) against the mean square of its error term involving subjects: MS S(A) for A, and MS BxS(A) for both B and AxB (the sketch below turns these ratios into F tests)
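Continuing the partition_ss sketch above, the F tests follow directly from these error-term pairings (a, b, and n are the number of A levels, B levels, and subjects per group; scipy supplies the p-values). The degrees of freedom are the standard ones for a balanced Ax(BxS) design.

```python
from scipy import stats

def mixed_anova_tests(ss, a, b, n):
    """F ratios for A, B, and AxB using the error terms listed above."""
    df = {"A": a - 1, "S(A)": a * (n - 1),
          "B": b - 1, "AxB": (a - 1) * (b - 1),
          "BxS(A)": a * (n - 1) * (b - 1)}
    ms = {k: ss[k] / df[k] for k in df}
    pairings = {"A": "S(A)", "B": "BxS(A)", "AxB": "BxS(A)"}  # effect -> error term
    return {eff: (ms[eff] / ms[err],                               # F statistic
                  stats.f.sf(ms[eff] / ms[err], df[eff], df[err])) # p-value
            for eff, err in pairings.items()}

# For the hypothetical data above: 2 groups, 3 conditions, 4 subjects per group
# print(mixed_anova_tests(partition_ss(pd.DataFrame(rows)), a=2, b=3, n=4))
```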

  7. Mixed Designs • Take Keppel’s example of kangaroo rats on pp. 438-439 • Between-subjects: 3 kinds of rats (A) • Within-subjects: number of landmarks (B) • Each subject is tested at all 4 levels of B

  8. Mixed Designs • Another way to analyze the data is with a multivariate analysis • It treats each rat’s scores as a vector • All of the b scores for a rat together represent a single multivariate observation • Assumptions for the data • Sphericity: homogeneity of the variances of the within-subject difference scores • Homogeneity of covariance: the within-subject covariance structure is the same across the between-subject groups
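To make the one-vector-per-subject view concrete, this hedged sketch (again with hypothetical names and simulated data) pivots long data to one row per subject and prints the within-group covariance matrices of the repeated measures, which are the quantities the sphericity and homogeneity-of-covariance assumptions concern.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Hypothetical long data: 2 groups x 4 subjects per group x 3 within-subject conditions
rows = [{"group": g, "subject": f"{g}{i}", "cond": c, "y": rng.normal()}
        for g in "AB" for i in range(4) for c in ("c1", "c2", "c3")]

# One multivariate observation per subject: rows = subjects, columns = levels of B
wide = (pd.DataFrame(rows)
        .pivot_table(index=["group", "subject"], columns="cond", values="y"))

# Within-group covariance matrices of the repeated measures; the assumptions
# concern the structure and similarity of these matrices across groups
for g, block in wide.groupby(level="group"):
    print(g)
    print(block.cov(), "\n")
```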

  9. Mixed Designs • We examine the effects of a new type of cognitive therapy on depression • Give a depression pre-test to a group of persons diagnosed as clinically depressed and randomly assign them to two groups (traditional and cognitive therapy) • After the patients have been treated under their assigned condition for some period of time, they are given the measure of depression again (post-test) • This design consists of one within-subjects variable (test), with two levels (pre- and post-), and one between-subjects variable (therapy), with two levels (traditional and cognitive)
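A hedged sketch of how this two-by-two design might be analyzed, assuming the pingouin package and hypothetical column names (subject, therapy, test, score) in a long-format frame like the one shown earlier; the within factor is the test occasion and the between factor is the therapy group.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumes the pingouin package is installed

rng = np.random.default_rng(3)
n = 12  # hypothetical number of patients per therapy group

# Long format: one row per patient x test occasion (simulated scores)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(2 * n), 2),
    "therapy": np.repeat(["traditional"] * n + ["cognitive"] * n, 2),
    "test": ["pre", "post"] * (2 * n),
    "score": rng.normal(20, 5, 4 * n),
})

# Mixed ANOVA: 'test' is the within-subjects factor, 'therapy' the between-subjects factor
aov = pg.mixed_anova(data=df, dv="score", within="test",
                     subject="subject", between="therapy")
print(aov)
```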

  10. Within-Subject Designs: Usefulness Researchers using the pre-post-control design look for an interaction in which one cell in particular, the experimental group’s post-test score, stands out. Ideally the pre-test scores will be equivalent across groups; it is the post-test difference between the experimental and control groups that matters.

  11. Pre-test/Post-test: Alternatives • One-way ANOVA on the posttest scores • Ignores the pretest data and is not recommended • Split-plot repeated measures ANOVA • The between-subjects factor is the group (treatment or control) and the repeated measure is the test score across the two trials. The resulting ANOVA table includes a main treatment effect (reflecting being in the control or treatment group) and a group-by-trials interaction (reflecting the treatment effect on posttest scores, taking pretest scores into account) • One-way ANOVA on difference scores • The difference is the posttest score minus the pretest score; this is equivalent to the split-plot analysis when there is a nearly perfect linear relation between pretest and posttest scores in all treatment and control groups • This linearity is reflected in a pooled within-groups regression coefficient of 1.0. As this coefficient approaches 1.0, this method becomes more powerful than the ANCOVA method • ANCOVA on the posttest scores • Uses the pretest scores as a covariate. When the pooled within-groups regression coefficient is less than 1.0, the error term is smaller than in the ANOVA on difference scores, and the ANCOVA method is more powerful (a sketch comparing these last two approaches follows)
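The last two alternatives can be compared directly. Below is a minimal sketch using statsmodels with hypothetical pre, post, and group columns; the posttest is simulated with a within-groups slope below 1.0, the situation in which the slide says ANCOVA should be the more powerful analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 20  # hypothetical subjects per group
pre = rng.normal(50, 10, 2 * n)
effect = np.r_[np.full(n, -5.0), np.zeros(n)]        # treatment lowers posttest scores
post = 0.7 * pre + effect + rng.normal(0, 5, 2 * n)  # within-groups slope of 0.7 (< 1.0)
df = pd.DataFrame({"pre": pre, "post": post,
                   "group": ["treatment"] * n + ["control"] * n})

# One-way ANOVA on difference scores (posttest minus pretest)
diff_model = smf.ols("I(post - pre) ~ group", data=df).fit()
print(sm.stats.anova_lm(diff_model, typ=2))

# ANCOVA on the posttest scores, with the pretest score as the covariate
ancova_model = smf.ols("post ~ pre + group", data=df).fit()
print(sm.stats.anova_lm(ancova_model, typ=2))
```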
