
Psychological Adjustment of Adoptees: A Meta-Analysis


Presentation Transcript


  1. Psychological Adjustment of Adoptees: A Meta-Analysis Michael Wierzbicki

  2. Previous Research • Overrepresentation of adoptees in clinical samples • Taken as evidence of greater maladjustment in adoptees • Clinic studies: compare adopted and non-adopted individuals on measures of severity or frequency of disorders • Non-clinical studies: compare adoptees and non-adoptees in the general population • Conclusions: • Adoptees are overrepresented in clinic populations • Non-clinic comparisons find only minor differences in adjustment • Adoptees experience more difficulties with externalizing disorders

  3. Flaws • Methodological • Example: diagnoses and ratings of adoptees were not always made by judges who were blind to adoptive status • Response bias? • Heterogeneity • Example: studies included children and adolescents, inpatients and outpatients… • Moderators? • Adoption status → Adjustment?

  4. What is a meta-analysis? • Assists reviewers in summarizing and interpreting the results of a large and diverse body of studies • Combines the results of several studies that address a set of related research hypotheses • Normally done by identifying a common measure of effect size across studies • More powerful than any single study
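The "common measure of effect size" idea can be sketched in Python. The d values and variances below are hypothetical, and inverse-variance weighting is one standard fixed-effect pooling approach; the slide does not specify the exact pooling formula used in the paper.

```python
# Minimal sketch of pooling per-study effect sizes into a single estimate.
# Hypothetical d values and variances; inverse-variance weighting gives
# more precise studies (smaller variance) more influence.

def pooled_effect_size(effects, variances):
    """Inverse-variance weighted mean of per-study effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Three hypothetical sub-study effect sizes (Cohen's d) and their variances
d_values = [0.40, 0.85, 0.60]
variances = [0.04, 0.09, 0.02]

print(round(pooled_effect_size(d_values, variances), 3))  # 0.574
```

This is why a meta-analysis is more powerful than a single study: the pooled estimate's variance, 1/Σw, is smaller than any individual study's variance.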

  5. Purpose • Conduct a meta-analysis of studies of psychological adjustment of adoptees • Hypotheses examined: • Adoptees have greater psychological maladjustment • Adoptees are overrepresented in clinical populations • Adoptees have more externalizing disorders • Also examined whether differences in adjustment were related to characteristics of the study, including methodological and subjective variables

  6. Method – Sample Criteria • (1) Were published in English • Excluded non-English studies, dissertations, unpublished manuscripts, etc. • (2) Were not limited to the adopted-away children of parents with psychological disorders • Excluded studies designed to examine genetic influences • (3) Reported sufficient data to permit effect-size calculation for differences in adjustment • Excluded studies with insufficient data and those that did not compare adoptees to nonadoptees

  7. Method – Effect Sizes (d) • From 66 studies to 106 sub-studies • Duplicate subject groups eliminated • Separate effect sizes reported when data were sufficient (e.g., by gender, age) • Reported test statistics were converted to d • Comparisons classified into 2 groups: • Clinical percentages → reported percentages of adoptees in the clinical population • Compared to the .02 national adoption rate • Group comparisons → comparisons of adoptees and nonadoptees on a measure of psychological functioning • Further divided (r = .85): internalizing, externalizing, academic, neurological, psychotic, general severity, other
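The conversion of a reported test statistic to d can be illustrated for an independent-samples t statistic, using the standard formula d = t·√((n1 + n2)/(n1·n2)); the study numbers here are hypothetical, not taken from the paper.

```python
import math

# Sketch: converting a reported independent-samples t statistic to
# Cohen's d. Hypothetical numbers, not from Wierzbicki's data.

def t_to_d(t, n1, n2):
    """Cohen's d from an independent-samples t statistic."""
    return t * math.sqrt((n1 + n2) / (n1 * n2))

# Hypothetical sub-study: t = 2.5 with 40 adoptees and 40 nonadoptees
print(round(t_to_d(2.5, 40, 40), 3))  # 0.559
```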

  8. Method – Variable Coding • Studies were coded according to 14 variables • Year of publication, Adoptee sample size, Total sample size, Sample type, Patient, Study type, Blindness of ratings, Adoption type, Self/other ratings, Objectivity of ratings, Percent male, Mean age of adoptees, Age range of adoptees, and Median age at adoption • Inter-rater reliability: median r = .91, all r > .70

  9. Method – Results • Mean effect sizes were calculated for clinical percentages and the sub-categories of group comparisons, as well as a within-study effect • Significance levels are Bonferroni corrected (at .01 and .05) because multiple tests were conducted • Adoptees were significantly higher in maladjustment than nonadoptees in their representation in clinical samples and on comparisons of externalizing disorders, academic problems, and general severity (see next slide)
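The Bonferroni correction mentioned above works by dividing the familywise alpha by the number of tests, so each individual test faces a stricter threshold. The count of eight comparisons below is illustrative, not the paper's exact count.

```python
# Sketch of a Bonferroni correction: with m tests, each test is judged
# at alpha / m so the familywise error rate stays at alpha.
# The m = 8 below is illustrative, not taken from the paper.

def bonferroni_alpha(alpha, m):
    """Per-test significance threshold after Bonferroni correction."""
    return alpha / m

print(bonferroni_alpha(0.05, 8))  # 0.00625
print(bonferroni_alpha(0.01, 8))  # 0.00125
```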

  10. Method – Results (Cont.)

  11. Method – Results (Cont.) • ANOVA results: • Mean effect sizes differed significantly across clinical percentages and group comparison categories, F(7, 253) = 11.18, p < .01 • Planned comparisons: • Mean clinical percentage effect size was significantly greater than the mean group comparison effect size, F(1, 116) = 22.95, p < .01 • Mean externalizing effect size was significantly greater than the mean internalizing effect size, F(1, 83) = 16.86, p < .01
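A one-way F statistic like those reported above is the ratio of between-group to within-group mean squares. A minimal sketch, with three hypothetical groups of effect sizes standing in for the comparison categories:

```python
# Sketch of the one-way ANOVA F statistic: MS_between / MS_within.
# The three groups of values are hypothetical.

def one_way_f(groups):
    """F statistic for a one-way ANOVA across k groups."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

print(one_way_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]]))  # 21.0
```

The degrees of freedom reported in the slide, (7, 253) and so on, are simply k − 1 and n − k from this same decomposition.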

  12. Method – Results (Cont.) • Weighted least squares regression analyses were conducted to examine the relationship between effect size and both sample and methodological characteristics • Question: Why is a weighted least squares regression used? • Answer: An assumption of linear modeling is that the standard deviation of the error is constant over all values of the predictor. This is frequently not the case, however. As such, it may not be reasonable to assume that every observation should be treated equally. For example, a procedure that treats all of the data equally would give less precisely measured points more influence than they should have. In this study, weighted least squares is used to ensure each data point has the proper amount of influence over the parameter estimates.
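The weighted least squares idea described above can be sketched for a single predictor: each observation carries a weight (e.g., inverse variance or sample size), so imprecisely measured points pull less on the fit. The closed-form weighted slope and intercept below, and the data, are illustrative; the paper's actual model had multiple predictors.

```python
# Sketch of weighted least squares for one predictor: points are weighted
# so that less precisely measured observations get less influence.
# Data and weights are illustrative.

def wls_fit(x, y, w):
    """Weighted least squares intercept and slope for y = a + b*x."""
    sw = sum(w)
    x_bar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    y_bar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - x_bar) * (yi - y_bar)
              for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - x_bar) ** 2 for wi, xi in zip(w, x))
    b = num / den
    return y_bar - b * x_bar, b

# Points on y = 2x, with the third point weighted twice as heavily
a, b = wls_fit([1, 2, 3], [2, 4, 6], [1, 1, 2])
print(a, b)  # 0.0 2.0
```

With all weights equal this reduces to ordinary least squares, which is the sense in which OLS "treats all of the data equally."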

  13. Method – Results (Cont.) • Three of the predictors had substantially more cases of missing data than other variables (percent male, mean age, and age range), so two dummy variables were created to incorporate this missing data • Question: Why were dummy variables utilized? • Answer: When you have a large quantity of missing data, you may want to determine if the data are missing at random or if there is bias in the missing data pattern. To address this, dummy variables can be created and then correlated with other variables to see if they are associated.
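The dummy-variable technique described above can be sketched as follows: a 0/1 indicator records which cases were missing, and the missing values are replaced so the cases stay in the regression. The "percent male" values are hypothetical, and mean-filling is one common companion step; the paper does not spell out its fill rule.

```python
# Sketch of the missing-data dummy-variable technique: a 0/1 indicator
# marks missing cases, and missing values are replaced with the observed
# mean so cases remain in the regression. Values are hypothetical.

def missing_dummy(values):
    """Return (missingness indicator, mean-filled copy) for one column."""
    indicator = [1 if v is None else 0 for v in values]
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    filled = [mean if v is None else v for v in values]
    return indicator, filled

# Hypothetical "percent male" column with two missing entries
indicator, filled = missing_dummy([0.5, None, 0.6, None])
print(indicator)  # [0, 1, 0, 1]
print(filled)
```

Entering the indicator as a predictor alongside the filled column is what lets the analysis test whether missingness itself is associated with effect size.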

  14. Method – Results (Cont.)

  15. Discussion • Within-study effect size = .72, indicating that adoptees are higher in maladjustment • Problem: What’s the cause? • Genetic? • Environment? • Selection bias?

  16. Discussion (Cont.) • Problem: Beta weights generated in the stepwise regression analysis are unstable • Question: Wierzbicki (1993) mentions that results should be interpreted with caution because the beta weights were unstable. What makes a beta weight stable? I assume stability is associated with small changes in beta across the various steps of an analysis. With that in mind, are there any hard and fast rules concerning what constitutes stable beta weights? • Answer: ??

  17. Discussion (Cont.) • Sample size was the strongest predictor of effect size in studies of adoptees in clinic populations; the two were inversely related • Suggests a bias toward publishing only studies with high effect sizes • Only published studies were included in this meta-analysis • Objectivity of ratings: as ratings of internalizing problems became more subjective, effect size increased • Possible bias against adopted individuals

  18. Discussion (Cont.) • Effect sizes for both internalizing and externalizing disorders were related to mean age of adoptees • Sample contained mostly child and adolescent populations • Also exemplified by the fact that age range was negatively related to internalizing effect size

  19. Other Questions • The authors talk about using a technique for incorporating missing data into the regression analyses via a dummy variable. I noticed that, in some cases, the dummy variable is significant in the regression analysis; what does this mean, or how is it interpreted? • Answer: As previously mentioned, this may imply that the data are not missing at random • In conducting the meta-analysis, comparisons were made based on clinical percentages and group comparisons, with effect sizes calculated from comparisons of adoptees and nonadoptees on a measure of psychological functioning. Within the group comparisons, effect sizes were further broken down into subcategories: internalizing, externalizing, academic, etc. How were the different subcategories determined, and could it affect the outcome of the study if the meta-analysis looked at depressed vs. anxious adoptees, or aggressive vs. hyperactive adoptees, etc., instead of grouping them into one category? • Answer: Because the analysis was conducted with a small sample size to begin with, further dividing the subcategories would lower power even more. This further division might be more specific, but a much larger sample size would be necessary.
