
Evaluating Comprehensive Equity Projects: An Overview






Presentation Transcript


  1. Evaluating Comprehensive Equity Projects: An Overview. Milbrey McLaughlin, Stanford University, November 17, 2008

  2. What is the “state of the art” of indicators and tools to evaluate a complementary learning system?

  3. Presentation outline
  • What are the elements of a “good measure” for a comprehensive equity project?
  • The theory of change that guides a comprehensive equity initiative
  • Existing indicators: the state of the art
  • Issues & challenges for a comprehensive equity initiative
  • Moving to an integrated data system

  4. What is a “good measure”? Indicators for a complementary learning initiative
  • Meaningful and face-valid to multiple stakeholders
  • Useful to policy makers and practitioners: actionable
  • Comparable over time & contexts
  • Reliable: resistant to manipulation
  • Practical to collect & analyze

  5. Elements of a good measurement system for a complementary learning initiative?
  • Looks across domains; collects positive as well as negative evidence
  • Attends to contextual considerations & factors outside policy maker/practitioner control
  • Asks questions stakeholders consider legitimate

  6. A good measurement system for a complementary learning initiative also…
  • Incorporates indicators that are accessible & relevant across agencies and levels
  • Secures buy-in from the multiple stakeholders involved and develops consensus about both measurement & use
  • Is parsimonious & efficient
  • Assesses the underlying theory of change

  7. Theory of Change: A Complementary Learning System to Support Educational Equity [diagram linking four levels: system-level change, setting-level change, individual-level outcomes, individual-level impacts]

  8. Indicators at each level include attitudinal, behavioral, knowledge and status data

  9. Individual-level indicators measure the “so what”: benefits to youth

  10. Individual-level Indicators
  • Academic attainment & attitudes
    – Status: achievement, school completion
    – Behavior & attitudes: attendance, motivation
    – Knowledge: career, post-secondary options
  • Physical development & health
    – Status: pregnancy, obesity
    – Behavior & attitudes: substance use, safe sex
    – Knowledge: healthy choices
  • Social & emotional development
    – Attitudes: connectedness, sense of efficacy, hope, self-regard, purpose

  11. Issues with individual indicators for a complementary learning system
  • Feasibility: moving beyond administrative data and the “anti’s” is expensive and labor-intensive
  • Achieving stakeholder consensus on “good measures”: paring down the possibilities
  • Reliability & internal validity: cultural, developmental, and contextual threats; variable rates over time
  • Conflating outcomes & impacts: indicators of both are needed

  12. Setting-level indicators measure elements of a program thought to affect individual outcomes & impacts

  13. Setting-level Indicators
  • Participation: youth & parents
  • Professional capacity & staff support
  • Youth relationships with adults
  • Youth leadership/voice
  • Menu of opportunities/quality
  • Partnerships

  14. Issues with setting-level indicators: practical
  • Feasibility: existing indicators are typically part of costly program evaluations [surveys, observations, focus groups] and not replicable in an indicator system
  • Limited local capacity to collect & analyze data, especially among CBOs
  • Data politics: worries about revealing shortfalls and jeopardizing funding

  15. Issues with setting-level indicators: technical
  • Generalizability: what is the “treatment,” given situated, “whatever it takes” practice and churn in participants, staff, and providers?
  • Qualitative considerations: interpretation more than enumeration
  • Unexamined context considerations & environmental shifts lead to misattribution

  16. System-level indicators measure elements of the relevant policy system that affect the setting-level indicators important to youth outcomes

  17. System-level Indicators
  • Committed, stable resources, both financial & technical
  • Cross-agency/sector collaboration
  • Dedicated infrastructure to support new institutional relationships
  • Capacity to provide support and use indicators
  • Political backing for the initiative & broad stakeholder support
  • Policy accommodations, e.g., waivers and incentives

  18. Issues with System-level Indicators
  • Few models or indicators exist, especially at the state level; development is needed
  • No great “felt need”: the focus is on outcomes
  • Political challenges: looking across agencies and budgets for evidence of collaboration and data integration

  19. Without cross-agency, cross-level indicators…
  • Cannot assess a complementary learning system’s theory of change
  • Difficult to monitor progress toward educational equity in terms of “inputs”
  • Difficult to track outcomes and progress across levels, identify implementation issues, and make necessary adjustments

  20. Needed: An Integrated Data System
  • Local report cards: Philadelphia, Baltimore
  • South Carolina: across public agencies
  • Chapin Hall: Integrated Database on Child and Family Programs in Illinois
  • The Youth Data Archive, John W. Gardner Center, Stanford

  21. What is the Youth Data Archive?
  • Links existing data from multiple agencies
  • A community resource to answer questions about youth in the larger environment
  • Supports inter-agency collaboration to improve service delivery and youth outcomes

  22. How the YDA differs from existing data integration efforts
  • Not a data “warehouse” but a partnership with existing communities and agencies, with specific attention to stakeholder buy-in
  • Includes CBOs, non-profits & qualitative data
  • Includes data from various system levels
  • Matches data at the individual level (see the sketch below)
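To make the last point concrete, here is a minimal sketch of how individual-level matching across agency extracts might work. It assumes records pseudonymized with a shared one-way hash; the field names (name, dob), file names, and hashing scheme are illustrative assumptions, not the YDA’s actual linkage method.

```python
# Hypothetical sketch of individual-level record matching across agencies.
# Field names, file names, and the hashing scheme are assumptions for
# illustration; they do not describe the YDA's actual linkage process.
import hashlib
import pandas as pd

def pseudonymize(df: pd.DataFrame, keys: list[str]) -> pd.DataFrame:
    """Replace identifying fields with a stable one-way hash so records
    from different agencies can be joined without sharing raw identifiers."""
    joined = df[keys].astype(str).agg("|".join, axis=1)
    out = df.drop(columns=keys)
    out["person_hash"] = joined.map(lambda s: hashlib.sha256(s.encode()).hexdigest())
    return out

# Each agency extract is pseudonymized with the same key fields...
school = pseudonymize(pd.read_csv("district_records.csv"), ["name", "dob"])
health = pseudonymize(pd.read_csv("clinic_records.csv"), ["name", "dob"])

# ...then matched at the individual level for cross-agency analysis.
linked = school.merge(health, on="person_hash", how="inner")
```

Hashing the identifying fields before the join lets agencies contribute records without exchanging raw identifiers, which speaks directly to the data-politics and buy-in concerns raised earlier.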

  23. Types of Data Ideally Included in Archive

  24. Some types of analyses…
  • Event histories: considering “value added”; pathways
  • Comparative contributions of similar resources
  • Conditional “treatment” effects; effects on subpopulations (see the sketch after this list)
  • Cross-context comparisons [demographic, SES, “treatment”, pathways]; creating a natural experiment
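As a rough illustration of a subpopulation analysis on linked data, the sketch below compares outcome changes for participants and non-participants within each subgroup. The column names (ell_status, participated, test_score_change) and the input file are hypothetical, and the simple difference in means is a naive comparison, not a causal estimate.

```python
# Hypothetical subpopulation comparison on linked, YDA-style data.
# Column names and the input file are assumptions for illustration only.
import pandas as pd
from scipy import stats

linked = pd.read_csv("linked_records.csv")

# Within each subgroup, compare the outcome change for youth who
# received the service against those who did not.
for group, sub in linked.groupby("ell_status"):
    treated = sub.loc[sub["participated"] == 1, "test_score_change"]
    control = sub.loc[sub["participated"] == 0, "test_score_change"]
    diff = treated.mean() - control.mean()
    # Welch's t-test; a naive comparison, not a causal estimate.
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    print(f"{group}: difference in means = {diff:.2f} (p = {p_value:.3f})")
```

Slides 25 and 26 report results from analyses of this general kind: service comparisons and subpopulation effects.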

  25. English Measure 1: Improvement in English-language CELDT scores, by service comparison [chart]. * Significant difference at the 90% level; ** significant difference at the 95% level.

  26. Mental Health Services through Family Resource Centers: estimated effect on California Standardized Test score changes, 2003-04 to 2004-05 (subpopulation analyses) [chart]

  27. YDA Process of Question Generation, Analysis & Use

  28. Lessons from the YDA
  • Expertise located in the “neutral middle” is strategically important
  • Oversight and buy-in from data contributors are essential and take time
  • Costs are front-loaded: once the relational database is established, it is not expensive to maintain
  • Progress is episodic and vulnerable to context shifts, especially in the beginning
  • The most difficult challenges are political, not technical or conceptual
  • Patience: comfort with slow buy-in and anxieties; attention to “comfort zones”

  29. Summing up…
  • Cross-domain indicators exist at the individual level, but they are expensive to collect beyond the status information in administrative data sets
  • Public administrative data feature “deficit” indicators, the “anti’s”
  • At both the individual & setting levels, the challenge is to develop and fund a parsimonious, reliable menu of indicators
  • System-level indicators need development, which requires specifying a theory of change
  • Process matters: stakeholders need to agree on the legitimacy and relevance of an indicator; there are no shortcuts

  30. The biggest challenge facing evaluation of a comprehensive educational equity agenda: securing political and other support for an integrated data system to monitor, assess, and inform a complementary learning system

  31. Evaluating Comprehensive Equity Projects: An Overview. Milbrey McLaughlin, Stanford University, November 17, 2008
