Presentation Transcript


  1. One View: “The strength of performance incentives is their ability to deal with complexity. By rewarding participants in the educational process when they do well and penalizing them when they do poorly, schools can harness the energy, ability, and inventiveness of individuals…Developing and employing incentive structures is not in itself easy…Nonetheless, performance incentives remain the best hope for getting on a path of long-run improvement.” Eric Hanushek, et al. 1994. Making Schools Work: Improving Performance and Controlling Costs. Washington, DC: The Brookings Institution, pp. 87-88.

  2. An Alternate View: “Despite the long history of assessment-based accountability, hard evidence about its effects is surprisingly sparse, and the little evidence that is available is not encouraging. There is evidence that effects are diverse, vary from one type of testing program to another, and may be both positive and negative. The large positive effects assumed by advocates, however, are often not substantiated by hard evidence, and closer scrutiny has shown that test-based accountability can generate spurious gains--thus creating illusory accountability and distorting program effectiveness—and degrade instruction.” Daniel Koretz. 1996. “Using Student Assessments for Educational Accountability.” In E. Hanushek and D. Jorgenson (eds.) Improving America’s Schools: The Role of Incentives. Washington, DC: National Academy Press, p. 172.

  3. The basic skeleton of accountability systems involves goals, standards for performance, measurement, and consequences. 1) Key components of performance management: • Focus on performance, not inputs • Set clear, measurable goals (state standards) • Develop measures linked to performance standards • Provide schools adequate resources and flexibility over the use of those resources • Measure performance • Hold school administration and staff accountable

  4. 2) Have to grapple with fundamental questions in education: • What is the desired outcome of education? • To what standards do we wish to hold our students? • Is it fair to expect the same levels of achievement from students from a variety of backgrounds? • Do schools have the ability to raise low-performing students to high standards if they are given proper resources and incentives? • Are public schools as efficient in their use of resources as private schools?

  5. Two Basic Types of Accountability • Hold students accountable for their performance before they are promoted (or graduate) – High stakes testing. • Hold schools and districts accountable for the gains in student achievement – School accountability programs. These two goals are clearly related but are not the same thing.

  6. Assessment Issues • What is the appropriate measure of success? • What are the goals of assessment? • What assessment choices exist? • What to test? • Whom to test? • How do tests fit between teacher discretion and learning standards? • What should be in learning standards?

  7. Accountability Issues • What could be the stakes in high stakes testing? • What are the potential negative effects of testing? • What are the objectives of a school accountability system? • Should accountability control for student SES? • What are the consequences of an accountability system? • What do we know about accountability systems in practice?

  8. What is the Appropriate Measure of Success? • Student performance on achievement tests. • What subjects? • What grades? • What type? • Performance on graded assignments (school or district grades) • “Authentic assessment”—use of a variety of measures, including actual test results, performance on actual assignments, and participation in related extracurricular activities.

  9. What Are the Goals of Student Assessment? • Student diagnosis (low stakes testing): Identify students who are not reaching minimum standards so that they can receive additional help. • Student incentives (high stakes testing): Identify students reaching/not reaching standards to determine who is promoted. • Program evaluation: Evaluate programs and curricula on whether they are effective in providing certain material. • School incentives: Evaluate the contribution of the school to student performance so that appropriate incentives can be provided. • Can one type of test do all of this?

  10. Achievement Tests are a Sample “The single most important fact about achievement tests is that most are small samples from large domains of achievement. Performance on the test itself, therefore, is or at least ought to be of little interest. Rather, what is important is students’ mastery of the domain that the test is intended to represent. Thus, the results of most achievement tests are meaningful only if one can legitimately generalize from performance on the sample included on the test to mastery of the domain it is intended to represent…The critical fact is simply that tests are typically small samples of domains.” Koretz, 1996, pp. 174-175.
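
Koretz’s sampling point can be made concrete with a small simulation. The sketch below is purely illustrative: the domain size, the test length, and a student whose true mastery is 70% of the domain are assumed numbers, not figures from the source. It shows how much an observed score on a short test can bounce around a student’s true level of mastery.

```python
import random

random.seed(0)

DOMAIN_SIZE = 1000   # assumed size of the full achievement domain (items/skills)
TEST_LENGTH = 40     # a short test samples only a few dozen of them
TRUE_MASTERY = 0.70  # hypothetical student who has truly mastered 70% of the domain

mastered = int(DOMAIN_SIZE * TRUE_MASTERY)
domain = [1] * mastered + [0] * (DOMAIN_SIZE - mastered)

# Draw several independent test forms and compare observed scores to true mastery.
for form in range(1, 6):
    sampled_items = random.sample(domain, TEST_LENGTH)
    observed = sum(sampled_items) / TEST_LENGTH
    print(f"form {form}: observed score = {observed:.2f} (true mastery = {TRUE_MASTERY})")
```

Each form is an equally legitimate sample of the same domain, yet the observed scores differ; generalizing from any one form to mastery of the whole domain is what gives the score its meaning.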

  11. What Are Your Choices in Developing an Achievement Test? • Norm-referenced vs. criterion-referenced? • Standardized exam vs. performance assessment? • Off-the-shelf exam vs. exam tailored to state standards? • Minimum competency exam vs. tests of mastery? • Program assessment vs. individual assessment? **Each exam has a different purpose—it is hard to use one test for all goals.
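
The norm-referenced vs. criterion-referenced distinction can be illustrated with a short sketch. The scores, the norm group, and the cut score of 70 below are all hypothetical; the point is only that the same raw score is read relative to other test takers in one case and relative to a fixed performance standard in the other.

```python
from bisect import bisect_right

# Hypothetical raw scores (0-100) for a norm group and for three students.
norm_group = sorted([52, 58, 61, 65, 68, 70, 73, 75, 79, 84])
students = {"A": 60, "B": 72, "C": 85}
CUT_SCORE = 70  # assumed proficiency cut score for the criterion-referenced reading

for name, score in students.items():
    # Norm-referenced: where does the student fall relative to other test takers?
    percentile_rank = 100 * bisect_right(norm_group, score) / len(norm_group)
    # Criterion-referenced: does the student meet the fixed standard?
    meets_standard = score >= CUT_SCORE
    print(f"Student {name}: percentile rank {percentile_rank:.0f}, "
          f"meets standard: {meets_standard}")
```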

  12. Criteria to Evaluate Tests: Validity and Reliability • Validity: The test measures what you want it to. • Content/construct validity: Provides a reasonably accurate picture of the whole domain. • Criterion validity: Indicates the relationship between test performance and performance on some other measure that is believed to capture the same construct. • Reliability: Represents the consistency of measurement. Repeat testing of the same individuals will produce approximately the same result; measurement error is random and not large. **Basic tradeoff between validity and reliability!
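
Reliability in the test-retest sense can be simulated directly. In the sketch below, the number of students, the spread of true achievement, and the size of the random measurement error are all assumed values; reliability is summarized, as is common, by the correlation between two administrations of the same test (statistics.correlation requires Python 3.10+).

```python
import random
import statistics

random.seed(1)

N_STUDENTS = 200
ERROR_SD = 5.0   # assumed standard deviation of random measurement error, in points

# Hypothetical "true" achievement levels for each student.
true_scores = [random.gauss(70, 10) for _ in range(N_STUDENTS)]

def administer(true_levels):
    """One administration of the test: true score plus random measurement error."""
    return [t + random.gauss(0, ERROR_SD) for t in true_levels]

first, second = administer(true_scores), administer(true_scores)

# Higher correlation between repeated measurements = more reliable test.
reliability = statistics.correlation(first, second)
print(f"test-retest correlation with error SD {ERROR_SD}: {reliability:.2f}")
```

Increasing ERROR_SD lowers the correlation, which is the sense in which large random measurement error undermines reliability.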

  13. Other Testing Issues • What to test: “What you test is what you get.” Richard Murnane and Frank Levy. 2001. “Will Standards-Based Reforms Improve the Education of Students of Color?” National Tax Journal 54: 407. • Whom to test: “Whom you test is who gets taught.” Murnane and Levy, p. 408. • Cross-section or time series? • Student mobility: what if a student was not in the school for the full year? • Consistency of the testing instrument vs. integrity of the assessment process.
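
The cross-section vs. time-series question, and the way student mobility complicates gain scores, can be sketched with made-up records. The student IDs, years, and scores below are hypothetical: a cross-sectional comparison averages whoever was tested in each year, while a time-series (gain-score) comparison uses only students tested in both years, so mobile students drop out of the calculation.

```python
# Hypothetical test records: {student_id: {year: score}}.
scores = {
    "s1": {2003: 62, 2004: 70},
    "s2": {2003: 55, 2004: 60},
    "s3": {2003: 71},            # moved away: no 2004 score
    "s4": {2004: 66},            # moved in: no 2003 score
}

# Cross-section: compare the average score of whoever was tested in each year.
by_year = {}
for record in scores.values():
    for year, score in record.items():
        by_year.setdefault(year, []).append(score)
cross_section = {year: sum(v) / len(v) for year, v in sorted(by_year.items())}

# Time series (gain scores): only students tested in both years contribute.
gains = [r[2004] - r[2003] for r in scores.values() if 2003 in r and 2004 in r]
average_gain = sum(gains) / len(gains)

print("cross-section averages by year:", cross_section)
print(f"average gain for matched students: {average_gain:.1f} "
      f"({len(gains)} of {len(scores)} students have both scores)")
```

In this toy example the year-to-year change in the cross-sectional averages differs noticeably from the average gain of matched students, which is exactly the kind of discrepancy the mobility question raises.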

  14. High Stakes Testing • Basic idea: “The central point is simple: far and away the most important determinant of how quickly students learn is the effort of students themselves. It follows that an increase in schools’ expectations of students could have important effects on the quality of public schooling. By establishing a rigorous set of educational standards, schools can create a set of incentives and rewards to promote student learning…In short, if administrators write a test properly, teaching to the test is exactly what teachers should be doing.” Julian Betts. 1998. “The Two-Legged Stool: The Neglected Role of Educational Standards in Improving America’s Schools.” Economic Policy Review (March): 98, 112.

  15. Linking Standards and Testing • High stakes tests should be linked to the learning standards developed by the state government. • Evidence suggests that most state assessment exams are only weakly related to their standards. • Actual school curricula and individual teacher practice may differ significantly from standards and assessment instruments. • How much discretion should teachers be given? Is the emphasis on standards reducing teachers’ professional judgment?

  16. Identifying Low-Performing Schools in NYC Ammar, Salwa, Robert Bifulco, William Duncombe, and Ronald Wright. 2000. “Identifying Low-Performing Public Schools,” Studies in Educational Evaluation 26: 259-287.

  17. Consequences of the School Accountability System • Parental reactions: The introduction of school report cards is clearly designed for this purpose. 43 states had report cards, and 35 had ratings, by 2003-04. • Rewards: Based on measures of the school effect—removing the influence of non-school inputs. 20 states had some form of reward program by 2003-04. • How large should the reward be? • Who should it be given to? • What can it be used for? • Is publicity a better motivator than money? Are monetary awards really necessary? • Punishments: • Bad publicity: the school is labeled as low performing--23 states had such programs in early 2003. • Require a school improvement plan, and perhaps an independent audit by outside consultants or the state. • Require introduction of a school reform initiative, such as Success for All. • Replacement of the principal and most of the staff (Chicago calls this “reconstituting” the school). • School choice: Allow students and parents the option of leaving the school to go to another district or a charter/private school. • State takeover of the school or district. This may involve the state appointing a board of education and superintendent and replacing many of the central office staff. (As of last year, states had closed, taken over, or reconstituted roughly 55 schools nationally.)

  18. Evidence on Effects of Accountability • Carnoy and Loeb? • Hanushek and Raymond?
