EVAL 6000: Foundations of Evaluation
Dr. Chris L. S. Coryn & Carl D. Westine
October 21, 2010
Agenda
• Eat
• Role play exercise
• Scriven’s “hard-won lessons”
• Evaluation models and approaches, part I
• Questions to consider
• Stufflebeam’s classification system
• Pseudoevaluations
• Questions- and methods-oriented approaches
• Questions about the mid-term exam, or other questions
A Note on My Schedule
• I leave for India on October 24 and return on November 4
• I leave for the AEA conference on November 8 and return on November 13
• If you need anything, please send questions by e-mail during those periods
Role Playing as Major Theorists
• Each of you has been assigned the role of a particular theorist
• In playing that role, you will be asked to discuss the following from the point of view of your assigned theorist:
  • What is the purpose of evaluation?
  • What is the role of the evaluator?
  • Is it the job of evaluators to make sure that evaluations are used? If so, why and how?
  • What is the relevance, if any, of a theory of social programming to evaluation?
  • What types of knowledge claims are central to evaluation, and why?
Hard-Won Lessons
• Over the coming weeks, we will briefly introduce and discuss some of Scriven’s “hard-won lessons” (1993), so read the monograph carefully
• Questions to consider for these discussions might include:
  • Are these issues still relevant?
  • Are there solutions where problems are identified?
  • Do they really matter in terms of evaluation practice, or are they academic arguments?
Questions for the Coming Weeks
• As we work through alternative models and approaches for evaluation over the next few weeks, keep the following questions in mind for discussion:
  • How do these models and approaches differ from earlier theories?
  • How are these models and approaches similar to earlier theories?
  • What reasons might explain the development of these models and approaches?
  • Where do the theorists in “Evaluation Roots” fit in the Stufflebeam taxonomy?
  • Are these models and approaches really improvements over earlier theories? Why or why not?
Stufflebeam’s Classification
• General classification scheme:
  • Pseudoevaluations
  • Questions- and methods-oriented
  • Improvement- and accountability-oriented
  • Social agenda and advocacy
  • Eclectic
Pseudoevaluations
• Shaded, selectively released, overgeneralized, or even falsified findings
• Falsely characterize constructive efforts (such as providing evaluation training or developing an organization’s evaluation capability) as evaluation
• Serving a hidden, corrupt purpose
• Lacking true knowledge of evaluation planning, procedures, and standards
• Feigning evaluation expertise while producing and reporting false outcomes
Pseudoevaluations
Approach 1: Public relations-inspired studies
• Begins with the intention to use data to convince constituents that an evaluand is sound and effective
• Typically presents an evaluand’s strengths, or an exaggerated view of them, but not its weaknesses
Pseudoevaluations
Approach 2: Politically controlled studies
• Can be defensible or indefensible
• Illicit if the study:
  • Withholds findings from right-to-know audiences
  • Abrogates a prior agreement to fully disclose findings
  • Biases the message by reporting only part of the findings
Pseudoevaluations
Approach 3: Pandering evaluations
• Cater to the client’s desire for a certain predetermined conclusion, regardless of an evaluand’s actual performance
• The evaluator seeks the “good graces” of the client
• Puts the evaluator in a favored position to conduct additional evaluations in the future
Pseudoevaluations
Approach 4: Evaluation by pretext
• The client misleads the evaluator as to the evaluation’s true purpose
• The evaluator does not investigate or confirm the true purpose
Pseudoevaluations
Approach 5: Empowerment under the guise of evaluation
• An external evaluator purports to empower a group to conduct its own evaluations, presented as being as advanced as external or independent evaluation
• Gives the evaluees the power to write or edit reports, creating the illusion that they were written or prepared by an independent external evaluator
• The main objective is to help the evaluee group maintain and increase resources, empower them to conduct and use evaluation to serve their interests, or lend them sufficient credibility to make their evaluations influential
Pseudoevaluations: Addition
• Appreciative inquiry is aimed at determining what is best about an evaluand (or other object of inquiry) and is premised on the assumption that positive feedback motivates positive performance
• Key theorists: Donna Mertens and Hallie Preskill
• Rooted in the transformative paradigm of social inquiry
  • The paradigm uses research (or evaluation) to improve conditions for marginalized groups
• Also rooted in social constructionism/constructivism (i.e., the view that reality is constructed and only the knower/perceiver is capable of knowing that reality)
Pseudoevaluations: Addition
• Based on five principles:
  • Knowledge about an organization (or any object) and the destiny of that organization are interwoven
  • Inquiry and change are not separate but simultaneous: inquiry is an intervention
  • The most important resources we have for generating constructive organizational change or improvement are our collective imagination and our discourse about the future
  • Human organizations are unfinished books: an organization’s story is continually being written by the people within the organization, as well as by those outside who interact with it
  • Momentum for change requires large amounts of both positive affect and social bonding: things such as hope, inspiration, and the sheer joy of creating with one another
Questions- and Methods-Oriented
• Address specific questions, often employing a wide range of methods (questions-oriented), or
• Typically use a particular method (methods-oriented)
• Whether the questions or methods are appropriate for assessing merit and worth is a secondary consideration
• Both types are narrow in scope and often deliver less than a full assessment of merit and worth
Questions- and Methods-Oriented
Approach 6: Objectives-based studies
• Some statement of objectives serves as the advance organizer
• Typically an internal study, conducted to determine whether the evaluand’s objectives have been achieved
• Operationalize objectives, then collect and analyze information to determine how well each objective was met
Questions- and Methods-Oriented
Approach 7: Accountability studies, particularly payment-by-results
• Typically narrow evaluation to questions about outcomes
• Stress the importance of obtaining an external, impartial perspective
• Key components include pass-fail standards, payment for good results, and sanctions for unacceptable performance
Questions- and Methods-Oriented
Approach 8: Success case method
• The evaluator deliberately searches for and illuminates instances of success and contrasts them with what is not working
• Compares the least successful instances to the most successful
• Intended as a relatively quick and affordable means of gathering important information for use in improving an evaluand
Questions- and Methods-Oriented
Approach 9: Objective testing programs
• Testing to assess the achievements of individual students and groups of students compared with norms, standards, or previous performance
Questions- and Methods-Oriented
Approach 10: Outcome evaluation as value-added assessment
• Recurrent outcome and value-added assessment coupled with hierarchical gain-score analysis
• Emphasis on assessing trends and partialling out the effects of the different components of an educational system, including groups of schools, individual schools, and individual teachers
• The intent is to determine what value each is adding to the achievement of students (a model sketch follows below)
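As an illustrative sketch only (the notation here is assumed for exposition, not taken from Stufflebeam), a hierarchical gain-score model of this kind might be written as:

\[
g_{ijk} \;=\; y^{\text{post}}_{ijk} - y^{\text{pre}}_{ijk} \;=\; \mu + u_j + v_{jk} + \varepsilon_{ijk},
\qquad u_j \sim N(0,\sigma^2_u), \quad v_{jk} \sim N(0,\sigma^2_v)
\]

where \(g_{ijk}\) is the achievement gain of student \(i\) taught by teacher \(k\) in school \(j\). The estimated school effects \(u_j\) and teacher effects \(v_{jk}\) are what get interpreted as the “value added” by each component of the system, after partialling out the overall trend \(\mu\) and student-level variation \(\varepsilon_{ijk}\).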
Questions- and Methods-Oriented
Approach 11: Performance testing
• Devices that require students (or others) to demonstrate their achievements by producing authentic responses to evaluation tasks, such as written or spoken answers, musical or psychomotor presentations, portfolios of work products, or group solutions to defined problems
• Performance assessments are usually life-skill and content-related performance tasks, so that achievement can be demonstrated in practice
Questions- and Methods-Oriented
Approach 12: Experimental studies
• Random assignment to experimental or control conditions, then contrasting outcomes
• Required assumptions can rarely be met
• As a methodology, addresses only a narrow set of issues (i.e., cause and effect); insufficient to address the full range of questions required to assess an evaluand’s merit and worth
• Provides unbiased estimates of effect sizes (a sketch of why follows below)
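To unpack the last bullet, here is a minimal potential-outcomes sketch, using standard notation assumed for illustration rather than drawn from the monograph:

\[
\hat{\tau} \;=\; \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}},
\qquad
E[\hat{\tau}] \;=\; E[Y(1)] - E[Y(0)] \;=\; \tau
\]

Because random assignment makes treatment status statistically independent of the potential outcomes \(Y(1)\) and \(Y(0)\), the simple difference in group means is an unbiased estimate of the average treatment effect \(\tau\), with no modeling assumptions about confounders required.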
Questions- and Methods-Oriented
Approach 13: Management information systems
• Like politically controlled studies, these supply managers with information; unlike them, the information is used to conduct and report on an evaluand rather than to win political favor
• Typically organized around objectives, specified activities, projected milestones or events, and budgets
• Examples: the Government Performance and Results Act (GPRA) of 1993 and the Program Assessment Rating Tool (PART)
Questions- and Methods-Oriented
Approach 14: Benefit-cost analysis
• Largely quantitative procedures designed to understand the full costs of an evaluand and to determine and judge what the investment returned in objectives achieved and broader societal benefits
• Compares the computed ratios to those of similar evaluands (illustrated below)
• Can include cost-benefit, cost-effectiveness, cost-utility, return on investment, rate of economic return, etc.
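As a simplified illustration of the kinds of ratios such studies compare (the dollar figures below are hypothetical):

\[
\text{BCR} \;=\; \frac{\text{PV(benefits)}}{\text{PV(costs)}},
\qquad
\text{ROI} \;=\; \frac{\text{PV(benefits)} - \text{PV(costs)}}{\text{PV(costs)}}
\]

For instance, a program whose discounted benefits total $1.5 million against discounted costs of $1.0 million has a benefit-cost ratio of 1.5 and a return on investment of 50%; judging whether that is good or poor is done by comparing the ratio to those of similar evaluands, as the slide notes.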
Questions- and Methods-Oriented
Approach 15: Clarification hearing
• A label for the judicial approach to evaluation
• Essentially puts an evaluand on trial
• Role-playing evaluators implement a prosecution and a defense
• A judge hears arguments within the framework of a jury trial
• Intended to provide balanced evidence on an evaluand’s strengths and weaknesses
Questions- and Methods-Oriented
Approach 16: Case study evaluations
• Focused, in-depth description, analysis, and synthesis
• Examines the evaluand in context (e.g., geographical, cultural, organizational, historical, political)
• Mainly concerned with describing and illuminating an evaluand, not determining merit and worth
• Stake’s approach differs dramatically from Yin’s
Questions- and Methods-Oriented
Approach 17: Criticism and connoisseurship
• Grew out of methods used in art and literary criticism
• Assumes that certain experts are capable of in-depth analysis and evaluation that could not be done in other ways
• Based on the evaluator’s special expertise and sensitivities
• Methodologically, relies on perceptual sensitivities, past experiences, refined insights, and the ability to communicate assessments
Questions- and Methods-Oriented
Approach 18: Program theory-based evaluation
• Centered on either:
  • A theory of how an evaluand of a certain type operates to produce outcomes, or
  • An approximation of such a theory within the context of a particular evaluand
• Less concerned with assessing merit and worth; more concerned with understanding how and why a program works, and for whom
• We will come back to this in detail in a few weeks
Questions- and Methods-Oriented
Approach 19: Mixed-methods studies
• Combines quantitative and qualitative techniques; methods-oriented, with the “method” being the mix itself
• Less concerned with assessing merit and worth; more concerned with “mixing” methodological approaches
• A key feature is triangulation
• Aimed at depth, scope, and dependability of findings
Encyclopedia Entries for this Week
• Fourth-generation evaluation
• Objectives-based evaluation
• Participatory evaluation
• Realist evaluation
• Realistic evaluation
• Responsive evaluation
• Theory-driven evaluation
• Utilization-focused evaluation