
Design Experiments




Presentation Transcript


  1. Design Experiments February 16, 2010

  2. Today’s Class • Probing Question • Design Experiments • Assignment will be handed out this evening

  3. Probing Question • Observation: Relatively few researchers use power analysis when designing their studies. • Why? • Are they making a mistake?
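To make the probing question concrete, here is a minimal sketch (not from the slides; the function name is my own) of the central calculation in a prospective power analysis: the approximate per-group sample size needed to detect a standardized effect size d with a two-sided two-sample t-test, using the standard normal approximation n ≈ 2·((z₁₋α/₂ + z₁₋β)/d)².

```python
# Minimal power-analysis sketch: per-group n for a two-sample t-test,
# via the normal approximation n = 2 * ((z_(1-a/2) + z_(1-power)) / d)^2.
from math import ceil
from statistics import NormalDist  # Python 3.8+

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group to detect Cohen's d at the given
    two-sided alpha and target power."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    n = 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2
    return ceil(n)

print(sample_size_per_group(0.5))  # "medium" effect → 63 per group
print(sample_size_per_group(0.2))  # "small" effect  → 393 per group
```

At the conventional α = .05 and 80% power, a "medium" effect needs roughly 63 learners per group and a "small" effect nearly 400, which suggests one answer to the probing question: researchers working in real classrooms rarely command samples that large.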

  4. Today’s Class • Probing Question • Design Experiments • Assignment will be handed out this evening

  5. Design Experiments (Collins, 1999; Collins et al., 2004) • Seven attributes of design experiments (relative to laboratory research) are listed by Collins et al. • I’d like to discuss them one by one • Do you agree with doing this when you do educational research? • Raise your hand • What are the benefits and drawbacks of this research design choice?

  6. Design Experiments (Collins, 1999; Collins et al., 2004) • Laboratory settings versus messy situations “Experiments conducted in laboratories avoid contaminating effects. Learners concentrate on the task without any distractions or interruptions. The materials to be learned are well defined and are presented in a standardized manner. Design experiments are set in the messy situations that characterize real life learning, in order to avoid the distortions of laboratory experiments.”

  7. Design Experiments (Collins, 1999; Collins et al., 2004) • A single dependent variable vs. multiple dependent variables

  8. Design Experiments (Collins, 1999; Collins et al., 2004) • Controlling variables vs. characterizing the situation

  9. Design Experiments (Collins, 1999; Collins et al., 2004) • Fixed procedures vs. flexible design revision “Psychological experiments follow a fixed procedure that is carefully documented, so that it can be replicated by other experimenters. Design experiments, in contrast, start with planned procedures and materials, which are not completely defined, and which are revised depending on their success in practice.”

  10. Note • Actually changing your study design in the middle of the study! • Going back to the lab at night and changing the design of the software (e.g., Cobb et al., 2001) • Thoughts on this?

  11. Note • Can you make inferences from your study to future settings where your intervention is used without active re-design and re-implementation by a large, high-quality team?

  12. Design Experiments (Collins, 1999; Collins et al., 2004) • Social isolation vs. social interaction “In most psychological experiments, the subjects are learning in isolation. There is no interaction with other learners and usually no interaction with a teacher or expert; the material to be learned is simply presented by text or video. By contrast, design experiments are set in complex social situations, such as a classroom.”

  13. Design Experiments (Collins, 1999; Collins et al., 2004) • Testing hypotheses vs. developing a profile “In psychological experiments the experimenter has one or more hypotheses, which are being tested by systematically varying the conditions of learning. In design experiments the goal is to look at many different aspects of the design and develop a qualitative and quantitative profile that characterizes the design in practice.”

  14. Design Experiments (Collins, 1999; Collins et al., 2004) • Experimenter vs. co-participant design and analysis “In psychological experiments the experimenter makes all decisions about the design and analysis of the data, in order to maintain control of what happens and how it is analyzed. In design experiments, there is an effort to involve different participants in the design, in order to bring their different expertise into producing and analyzing the design.”

  15. 7 aspects • Laboratory settings versus messy situations • A single dependent variable vs. multiple dependent variables • Controlling variables vs. characterizing the situation • Fixed procedures vs. flexible design revision • Social isolation vs. social interaction • Testing hypotheses vs. developing a profile • Experimenter vs. co-participant design and analysis

  16. 7 aspects • Across the room, we see that there is agreement with some of these aspects and disagreement with others

  17. Controlled experiments in school • Which of the 7 are present in this type of research? • What are the key differences between Design Experiments and RCTs?

  18. Controlled experiments – which? • Laboratory settings versus messy situations • A single dependent variable vs. multiple dependent variables • Controlling variables vs. characterizing the situation • Fixed procedures vs. flexible design revision • Social isolation vs. social interaction • Testing hypotheses vs. developing a profile • Experimenter vs. co-participant design and analysis

  19. Comparing… • Design experiments vs. controlled experiments in school • What are the relative benefits/drawbacks of each?

  20. Collins et al. on controlled studies (Do you agree? Disagree?) “Large-scale studies of educational interventions use a variety of measures to determine the effects of a program or intervention. The methods usually emphasize standardized measures and survey of critical participants, not tied to any particular design. These studies can be used to identify critical variables and to evaluate program effectiveness in terms of test scores, but they do not provide the kind of detailed picture needed to guide the refinement of a design. They are crucial however for summative research, and in our discussion of how design experiments methodology might be extended to summative research, we borrow from this kind of methodology.”

  21. Do you agree? Disagree?

  22. Do controlled classroom studies have to be large-scale?

  23. Barab & Squire (2004) • “What separates design-based research in the learning sciences from formative evaluation is • (a) a constant impulse toward connecting design interventions with existing theory • (b) the fact that design-based research may generate new theories (not simply testing existing theories) • (c) that for some research questions the context in which the design-based research is being carried out is the only context where the claims can be validly studied”

  24. Do you agree that more controlled classroom research is less able to address these issues? • “What separates design-based research in the learning sciences from formative evaluation is • (a) a constant impulse toward connecting design interventions with existing theory • (b) the fact that design-based research may generate new theories (not simply testing existing theories) • (c) that for some research questions the context in which the design-based research is being carried out is the only context where the claims can be validly studied”

  25. Barab & Squire argue • “Design-based research requires more than simply showing a particular design works but demands that the researcher (move beyond a particular design exemplar to) generate evidence-based claims about learning that address contemporary theoretical issues and further the theoretical knowledge of the field.”

  26. Do you agree with this goal? • “Design-based research requires more than simply showing a particular design works but demands that the researcher (move beyond a particular design exemplar to) generate evidence-based claims about learning that address contemporary theoretical issues and further the theoretical knowledge of the field.”

  27. Do Design Experiments or controlled studies support this goal more? • “Design-based research requires more than simply showing a particular design works but demands that the researcher (move beyond a particular design exemplar to) generate evidence-based claims about learning that address contemporary theoretical issues and further the theoretical knowledge of the field.”

  28. Barab & Squire “Design-based researchers not only recognize the importance of local contexts but also treat changes in these contexts as necessary evidence for the viability of a theory. Design-based research that advances theory but does not demonstrate the value of the design in creating an impact on learning in the local context of study has not adequately justified the value of the theory.”

  29. Aside from… • Aside from whether design experiments are the best way to achieve this goal (which we just talked about) • Do you agree that producing concrete changes in real learners in real settings is necessary for learning theory to be valuable?

  30. Barab & Squire “One of the central ideas in the scientific paradigm is replicability; however, because design-based researchers cannot (and may not want to) manipulate cultural contexts, it becomes difficult to replicate others’ findings (Hoadley, 2002). Therefore, the goal of design-based research is to lay open and problematize the completed design and resultant implementation in a way that provides insight into the local dynamics. This involves not simply sharing the designed artifact, but providing rich descriptions of context, guiding and emerging theory, design features of the intervention, and the impact of these features on participation and learning.”

  31. Note • There are replicable findings in classrooms about learning methods (like for Cognitive Tutors and Accountable Talk) • Though replication is not always perfect • What aspects of design experiments might explain lower replicability?

  32. Your thoughts on “not simply sharing the designed artifact, but providing rich descriptions of context, guiding and emerging theory, design features of the intervention, and the impact of these features on participation and learning.”

  33. Going further • “any classroom context, even without the manipulations of a design researcher, is impacted by the systemic constraints in which it is nested, thereby making the generalizability of any naturalistic findings highly suspect.” • How could this problem be addressed, in either paradigm?

  34. Design Experiments – where?

  35. Important Reminder • Which we’ll talk about more next week

  36. Important Reminder • Dominant role of Design Experiments in the Learning Sciences today • “Over the next 2 years, the Journal of the Learning Sciences is especially interested in research that falls under the umbrella of design-based research—as these articles are accepted we will place them on the JLS website so that others can continue the dialogue.” (Barab & Squire, 2004)

  37. JLS review excerpt Taken on face value, their overall point about difficulty factors is well made. Yet, since the activities are being described, retrospectively, it is not clear how much prospective design was involved. For that reason, it is hard to know how this work fits, if at all, into what is currently called "design research," at least as illustrated by the studies cited here. If anything, this study shows the mirror image of current design research by its insistence on remaining within an ACT-R model of mathematics, using large-scale quantitative studies, objective measures, and an unwillingness to fundamentally alter the central parameters of the initial software design. "Not that there is anything wrong with that" (Seinfeld, c. 1998).

  What we have here is a lessons-learned review of a set of studies. Clearly, things were learned and outcomes were influenced. Presumably, this is a hallmark of all productive research programs. What makes this study illustrative of design? It appears to be quite mainstream psychology/expert system/cognition work. Methodologically, it is more reminiscent, perhaps, of "traditional" curricular program evaluation with clear adherence to formative and summative feedback via tests. Given that the difficulty factors approach really boils down to the quality of measures, I was disappointed at the absence of discussion about the conceptual and psychometric qualities of the instruments.

  This latter point underscores the desiccated nature of the paper from a design perspective. It would be far more interesting to hear these same authors not just review their own work as it is constrained by the furrow of ACT, but rather consider the design paths not taken. These paths are conceptual, methodological, and instrumental.

  38. Bottom line • Important to define yourself in relation to this approach if you want to publish in JLS • (Not as important if you want to publish in J. Ed. Psych, or IJAIED, or AERJ…)

  39. Today’s Class • Probing Question • Design Experiments • Assignment will be handed out this evening
