
Research: Design and Outcome



  1. Research: Design and Outcome

  2. Lecture Preview • Research Methods and Designs • Cross-Sectional and Longitudinal Designs • Treatment Outcome Research • Questions and Challenges in Conducting Treatment Outcome Research • Contemporary Issues in Clinical Psychology Treatment Outcome Research • How and Where Is Research Conducted in Clinical Psychology and How Is It Funded?

  3. Practitioners (clinicians) conduct psychotherapy • Investigators (scientists, researchers) conduct research.

  4. Research forms the foundation of clinical psychology. Basic and applied research provides the clues to questions about diagnosis, treatment, and general human behavior. Research allows practitioners to apply their techniques and theories with confidence. Psychology is the only mental health discipline that has its roots in academic research rather than in practice. Psychiatry, social work, and marriage and family counseling have their roots in practice rather than in research.

  5. The scientist-practitioner (Boulder) model, the scholar-practitioner (Vail) model, and the newer clinical scientist model all emphasize the value of conducting research. Clinical psychologists conduct research in hospitals and clinics, schools and universities, the military, and business settings.

  6. Research is needed not only to better understand human behavior but also to develop psychological assessment techniques and treatment strategies that are reliable, valid, and effective.

  7. Tensions have existed between the research and applied interests of psychology since clinical psychology began in 1896. Clinicians feel that researchers conduct studies that are too difficult to understand or too irrelevant to be of help with actual patients, while researchers feel that clinicians provide services that "feel right" rather than selecting those supported by empirical research.

  8. Research Methods and Designs The goal of research is to acquire knowledge about behavior and to use this knowledge to help improve the lives of individuals, families, and groups. Clinical psychologists use the scientific method in conducting research. The scientific method is a set of rules and procedures used to describe, explain, and predict a particular phenomenon.

  9. Research Methods and Designs This method includes the observation of a phenomenon, the development of hypotheses about the phenomenon, the empirical testing of the hypotheses, and the alteration of hypotheses to accommodate the new data collected and interpreted.

  10. Research Methods and Designs First, the clinical psychologist must objectively describe a given phenomenon; this is termed an "operational definition." The DSM-5 is a tool for this purpose. Then, a hypothesis must be developed and tested to explain the behavior of interest. For example, researchers may be interested in the effect of social support on depression. They may hypothesize that depressed patients with high social support improve more than patients with low social support.

  11. Research Methods and Designs Once a hypothesis is developed, it must be tested to determine its accuracy and usefulness and adapted to accommodate consistent and inconsistent research findings. A valid hypothesis can be used both to explain and to predict behavior. Many different types of research experiments and investigations are used to test hypotheses.

  12. Experiments Conducting an experiment is the classic way to apply the scientific method to a research question. For example, suppose we were interested in designing a procedure for reducing test-taking anxiety. We wish to find out whether relaxation or aerobic exercise might be useful in helping to reduce test anxiety prior to a stressful exam.

  13. Experiments First, a hypothesis is needed. We may believe that while both aerobic exercise and relaxation might help to lower test-taking anxiety relative to a control condition, the relaxation technique might prove the superior method. Relaxation has been shown to be helpful with other types of fears and anxieties, and it helps to reduce the physiological arousal (e.g., elevated heart rate and blood pressure) associated with anxiety.

  14. Independent and Dependent Variables After a hypothesis is proposed, an experiment must be designed to evaluate the hypothesis. The researcher must select both independent and dependent variables. The variable manipulated by the researcher is the independent variable (IV). Treatment condition (i.e., relaxation, aerobic exercise) would be the IV in the test-anxiety study.

  15. Independent and Dependent Variables The variable expected to change as a result of experimental manipulation is the dependent variable (DV). The DV is what is measured by the researcher to determine whether the hypothesis can be supported or not. Scores on a test-anxiety scale following treatment might be the DV. Research studies evaluate the influence of the IV(s) on the DV(s).

  16. Minimizing Experimental Error A critical goal of all experiments is to minimize experimental error. Experimental error occurs when changes in the DV are due to factors other than the influence of the IV. For example, if the experimenter is aware of the hypothesis that relaxation is superior to aerobic exercise in reducing test-taking anxiety, the experimenter's biases may influence the results. This is termed an experimenter expectancy effect.

  17. Minimizing Experimental Error The experimenter must minimize potential error or bias by using a research assistant who is unaware of (blind to) the hypotheses of the study, and by using a randomization procedure. Randomization: the experimenter randomly varies a variable across experimental and control conditions; in practice, the researcher randomly assigns the research subjects to experimental and control conditions. The potential influence of confounding variables is thereby distributed across experimental and control conditions.
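The random-assignment procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed protocol; the subject IDs and condition names are made up, and the round-robin dealing keeps the three groups equal in size:

```python
import random

def randomize(subjects, conditions, seed=None):
    """Shuffle the subject pool, then deal subjects round-robin into
    the conditions so that group sizes stay as equal as possible."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, subject in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(subject)
    return groups

# 30 hypothetical subjects assigned to the test-anxiety study's conditions.
groups = randomize(range(1, 31), ["relaxation", "aerobic", "control"], seed=42)
print({c: len(g) for c, g in groups.items()})
# each condition receives 10 subjects, chosen at random
```

Because assignment depends only on the shuffle, any confounding variable (age, baseline anxiety, etc.) is expected to spread evenly across conditions as the sample grows.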

  18. Minimizing Experimental Error Experimenters must use both reliable and valid measures. Reliability refers to the stability or consistency of a measurement procedure. A method for assessing test anxiety should result in similar scores whether the test is administered at different times or by different researchers. Validity refers to whether an instrument measures what it was designed to measure. Any measure used in research must have adequate reliability and validity.

  19. Maximizing Internal and External Validity Research experiments must be designed to maximize both internal and external validity.

  20. Internal validity refers to the condition in which only the influence of the IV accounts for the results obtained on the DV. It is the extent to which an experiment rules out alternative explanations of the result. Any potential extraneous influence on the DV (other than the influence of the IV) becomes a threat to the experiment's internal validity. The factors other than the IV that could explain the results are called threats to internal validity.

  21. Examples of threats to internal validity Extraneous variables that may threaten the internal validity include the effects of : • History • Maturation • Testing • Instrumentation • Statistical Regression • Selection Bias • Experimental Mortality

  22. History refers to events outside the experimental situation (e.g., earthquakes, death of a loved one) that could have a significant impact on the results of the study. Any such event occurring during the experiment may account for the results. Maturation refers to changes within subjects over the passage of time (e.g., aging; becoming fatigued, bored, or stronger).

  23. Testing concerns the influence of the testing or evaluation process itself on research results, such as in the use of repeated measures obtained on the same subjects over time. Practice or carry-over effects mean that experience in an early part of the experiment might change behavior in a later part of it. Instrumentation refers to the influences of the tests and measurement devices used to measure constructs in the study. Subjects may respond differently on a scale at different periods of the experiment.

  24. Statistical regression concerns the tendency of extreme scores on a measure to move toward the mean over time. Experimental mortality refers to attrition or subject dropout in an experiment.
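Regression toward the mean can be demonstrated with a small simulation. Assuming (for illustration only) that each observed score is a stable true score plus random measurement error, subjects selected for extreme scores at one testing will, on average, score closer to the mean at the next testing even though nothing about them has changed:

```python
import random

rng = random.Random(0)

# 1,000 simulated subjects: true score ~ N(50, 10), plus N(0, 10) noise
# at each of two testings.
true_scores = [rng.gauss(50, 10) for _ in range(1000)]
time1 = [t + rng.gauss(0, 10) for t in true_scores]
time2 = [t + rng.gauss(0, 10) for t in true_scores]

# Select the 50 highest scorers at time 1 (an "extreme" group).
extreme = sorted(range(1000), key=lambda i: time1[i], reverse=True)[:50]
m1 = sum(time1[i] for i in extreme) / 50
m2 = sum(time2[i] for i in extreme) / 50
print(round(m1, 1), round(m2, 1))  # the time-2 mean falls back toward 50
```

This is why a treatment group selected for extreme pretest scores can appear to "improve" even without any treatment effect, and why control groups matter.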

  25. Selection bias refers to a differential and problematic procedure for choosing research subjects. For example, bias would occur if students selected for the experimental treatment groups on test-taking anxiety came from a clinic while control subjects came from a psychology class. Bias occurs because the treatment and control subjects were selected from different populations of students.

  26. External validity refers to the generalizability of the research results beyond the conditions of the experiment. • The more similar the research experiment is to a "real world" situation, the more generalizable the findings. • However, the more careful an experimenter is about maximizing internal validity, the more likely the experimenter is to limit external validity. A high degree of control is necessary to minimize experimental and random error and thus maximize internal validity.

  27. Examples of threats to external validity Researchers must carefully examine threats to external validity prior to conducting their experiments. • Testing • Reactivity • Multiple-Treatment Interference • Interaction of Selection Biases

  28. Testing refers to the use of a questionnaire or assessment device that may sensitize and alter the subject's response and therefore influence the DV. Reactivity concerns the subject's potential response to participating in an experiment. The subject may behave differently in an experiment than in the natural environment. For example, a subject who knows that he or she is being observed during an experiment may behave in a more socially desirable manner (social desirability).

  29. Multiple-treatment interference refers to exposing a subject to several treatment conditions such that the experimenter cannot isolate the effect of any specific condition. For example, a subject in the relaxation condition may receive a videotape that presents relaxing music, nature scenes, and instructions in guided imagery and progressive muscle relaxation. Interaction of selection biases concerns the notion that subjects in one group may have been differentially responsive to the experimental condition in some unique manner.

  30. Experimental Designs There are many different ways of designing an experiment, and each design offers advantages and disadvantages. It is important to match the right experimental design to the right research question and to construct each experiment to maximize both internal and external validity.

  31. True Experimental Designs To demonstrate cause-and-effect relationships, we must conduct true experiments, which use randomization. Randomization is a procedure in which subjects are selected in such a way that they all have an equal chance of being placed in the different control and experimental groups.

  32. True Experimental Designs • The IV is manipulated. • The DV is measured. • There must be at least two groups (i.e., at least two levels of the IV).

  33. True Experimental Designs Several unique challenges are associated with such studies: • It is often impossible or unethical to randomly assign subjects to certain experimental or control conditions, as in studies of the effects of sexual abuse or maternal deprivation. • It is often impossible or unethical to assign patients to a control condition in which they receive no treatment; for example, it would be unethical to assign suicidal patients to a no-treatment control condition for several months. Waiting-list and placebo conditions are common alternatives.

  34. True Experimental Designs • Because certain disorders (e.g., trichotillomania) are rare, it is difficult to obtain enough subjects for experimental and control conditions. • Because many patients have several diagnoses, comorbidity is common, and it is often difficult to find people who experience a single, pure disorder.

  35. In addition to true experimental designs, there are quasi-experimental designs; between-, within-, and mixed-group designs; analogue designs; case studies; correlational methods; epidemiological methods; and longitudinal and cross-sectional designs. Many of these are not mutually exclusive. Correlational designs can be longitudinal, cross-sectional, or both. A study can include both between- and within-group designs. The experimental and quasi-experimental approaches can also use between-, within-, and mixed-group designs.

  36. Quasi-Experimental Designs When randomization is impossible, an experimenter may choose to use a quasi-experimental design. For example, a treatment-outcome study conducted at a child guidance clinic must use patients already being treated at the clinic. Because the experimenters cannot decide who can receive treatment and who must remain wait-listed, randomization is impossible.

  37. Between Group Designs use two or more separate groups of subjects. Each group receives a different type of intervention, or a control group receives no intervention. The IV is manipulated by the experimenter so that different groups receive different types of experiences. In the test-taking anxiety example, one group received relaxation, a second group received aerobic exercise, and a third group received a control condition.

  38. Between Group Designs Ideally, subjects are randomly assigned to treatment and control conditions. To ensure that gender and age are similar in each experimental and control condition, the experimenter would match subjects such that males and females as well as different ages are distributed across the groups. There are different types of between group designs.

  39. Between Group Designs The pretest-posttest control group design includes two or more subject groups. While one group receives treatment, the other does not. Subjects are evaluated both before and after treatment on the dimension of interest. For example, a test-anxiety questionnaire might be used both before the treatment begins and after the completion of treatment. Control subjects would complete the test anxiety questionnaire at the same time the experimental group completes the materials.

  40. The pretest-posttest design’s disadvantage: the administration of a pretest might sensitize subjects or influence their response to treatment.

  41. Between Group Designs The factorial design provides an opportunity to study two or more factors in a given study. Two IVs (e.g., gender and ethnic background of therapist) can be examined at the same time. For example, treatment might be conducted with four groups: male African American therapist, female African American therapist, male Caucasian therapist, female Caucasian therapist. This would be considered a 2 × 2 factorial design.

  42. Adding two additional ethnic groups to the design (e.g., Asian American, Hispanic American) would create a 2 (gender) × 4 (ethnicity) factorial design. The factorial design’s advantage: the experimenter can examine the role of interactions between factors.
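The factorial crossings described above (2 × 2 yielding four therapist groups, 2 × 4 yielding eight) can be enumerated by crossing the factor levels directly. A minimal sketch; the factor names and levels follow the lecture's example, and the helper function is illustrative:

```python
from itertools import product

def factorial_cells(**factors):
    """Enumerate every cell of a factorial design by crossing the
    levels of each named factor."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# 2 (gender) x 2 (ethnicity) design -> 4 cells
cells_2x2 = factorial_cells(
    gender=["male", "female"],
    ethnicity=["African American", "Caucasian"])

# 2 (gender) x 4 (ethnicity) design -> 8 cells
cells_2x4 = factorial_cells(
    gender=["male", "female"],
    ethnicity=["African American", "Caucasian",
               "Asian American", "Hispanic American"])

print(len(cells_2x2), len(cells_2x4))  # 4 8
```

The cell count is always the product of the number of levels per factor, which is why each added factor or level multiplies the sample size the study needs.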

  43. Within Group Designs are used to examine the influence of the IV (such as treatment) on the same subjects over time. Subjects are not assigned to different experimental and control groups; all subjects receive the same research procedure or treatment. The same patient is examined at different points in time, such as during a baseline or pretreatment period, a treatment intervention period, and a follow-up or posttreatment period. For example, memory and attention deficits in anxiety patients taking SSRIs can be examined during the pretreatment, treatment, and posttreatment periods.

  44. Within Group Designs Experimenters using this design must be careful about ordering or sequencing effects. Ordering effects refer to the influence of the order in which treatment or experimental conditions are presented to the subjects.
