Value-Added Assessment

Presentation Transcript


  1. Value-Added Assessment Group A Phyllis Abdur-Rashed Tikitia Glover Samaria Joyner Christine Rinehart

  2. Origins • Dr. William Sanders created the assessment to align with school curricula while reliably measuring both high-achieving and low-achieving children • Sanders argues that teachers’ effects are cumulative and additive, and • Beginning teachers are less effective than experienced teachers

  3. What is Value-Added Assessment? • A method of analyzing data to predict how students will perform in a given year: whether they have made expected, less than expected, or more than expected progress (CGP, Value-Added Assessment) • Measures learning like a growth chart, with a statistical prediction of how much learning growth should occur; this growth should occur regardless of race, SES, language, and other factors, not because those factors are ignored but because they are factored in (Mahoney, 2006) • Measures learning and teaching: how teachers, schools, and districts specifically contributed to learning (CGP, Value-Added Assessment)

  4. What is Value-Added Assessment? • The difference between students' knowledge and skills at entry and their knowledge and skills at the time of graduation (Pickering & Bowers, 1990)
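
The growth-chart idea above can be made concrete with a minimal sketch. The linear projection, the expected_scores function, and the sample scores below are all illustrative assumptions, not Sanders's actual method; real systems such as TVAAS rely on far more elaborate longitudinal mixed models.

```python
# Minimal illustration of the value-added idea: project each student's
# expected score from prior performance, then treat the gap between the
# actual and predicted score as the "value added." Toy model and data only.
import numpy as np

def expected_scores(prior, slope=1.0, typical_gain=10.0):
    """Project this year's score from last year's (a toy growth model)."""
    return slope * np.asarray(prior, dtype=float) + typical_gain

prior = np.array([480, 510, 530, 495])   # last year's scale scores (made up)
actual = np.array([495, 515, 555, 500])  # this year's scale scores (made up)

value_added = actual - expected_scores(prior)
for student, va in enumerate(value_added, start=1):
    label = "more than" if va > 0 else "less than" if va < 0 else "exactly"
    print(f"Student {student}: {va:+.1f} points ({label} expected growth)")
```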

  5. Strengths • Entry-level data aid in academic advising and placement of students into remedial courses • When value-added assessments are tailored to the institution, inappropriate interinstitutional comparisons can be avoided • Value-added can be used as a longitudinal assessment model to evaluate whether change is lasting • Helpful in improving curricula and instruction (Pickering & Bowers, 1990)

  6. Additional Strengths • The usefulness of value-added data to students in understanding and developing their own talents • The usefulness of value-added data in measuring the institution’s impact on students • Students become better test takers through the feedback they receive and by monitoring their own growth and development (Pickering & Bowers, 1990)

  7. More Highlights… • “As a concept for evaluating quality, value added is well received not only in higher education but in many other sectors, including elementary and secondary education, health care, and business” (Fulcher & Willse, 2007) • Focus is on the individual student: the student’s current learning provides a baseline from which expected growth is projected; a better than expected future score indicates highly effective instruction (CGP, Value-Added Assessment, 2004) • A lower than expected score would indicate poor instruction unless other students in similar circumstances scored well (CGP, Value-Added Assessment)

  8. Highlights continued… • Authors of a report issued by RAND reviewed several studies indicating that teacher effectiveness is a significant contributor to student learning and that the effects of a high-quality teacher persist over several years. Although many problems existed with the individual studies, taken as a whole they suggest that teacher effects exist and may persist, though to an unknown degree (McCaffrey, Lockwood, Koretz & Hamilton, 2003)

  9. NCLB Comparison • NCLB focuses on groups of students; value-added focuses on individual students • NCLB compares this year’s 4th graders with last year’s 4th graders; value-added tracks each student’s growth over time • NCLB provides no rewards for success and gives no credit for improvement; value-added showcases levels of improvement even when AYP is not met • NCLB does not address ineffective teachers; value-added identifies both effective and ineffective strategies and teachers (CGP, Value-Added Assessment)

  10. Primary Contention with NCLB • Teachers cannot control how students come into the classroom and should not be responsible for moving all students to a predesignated standard; however, • All teachers should be responsible for contributing to significant learning growth regardless of the student’s starting point (Mahoney, 2006)

  11. Best Value-Added Descriptions • Preliminary • Potential • Promising (Bracey, 2006)

  12. 2003 RAND Report • Not enough research has been done on Value-Added Measurement (VAM) to support its use as a high-stakes measurement tool • Unlike the NCLB testing requirements, VAM does avoid cohort-to-cohort differences, but • Bias issues exist • Sampling-error issues exist, including those related to class size and the availability of test-score data • Achievement tests are given only once a year and cannot measure all topics (McCaffrey, Lockwood, Koretz & Hamilton, 2003)

  13. Pickering & Bowers, 1990 • Thirteen years prior to the RAND report, Pickering & Bowers issued many of the same cautions and warnings • Bias • Pretest-postest design flaws • Sampling errors • Lack of a control group

  14. Other Criticisms • Multiple methods are available for computing value-added results; some are simple, but most require computerized calculations demanding expertise beyond that of most school districts. Some methods are better than others, but none are perfect (Olson, 2004)

  15. Missing data are treated as random (assumed to reflect the same average growth as peers); this is probably incorrect, as students likely to have missing data are also more likely to be transient, with associated problems of poverty, limited English skills, and less exposure to learning experiences (Olson, 2004)
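
A toy simulation can show why this assumption matters; the growth rates and missingness pattern below are invented for illustration and do not come from Olson or any cited study.

```python
# Toy simulation of the missing-data caution: transient students tend to
# grow less AND are more often missing from testing, so assuming missing
# students grew like the observed average overstates true growth.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
transient = rng.random(n) < 0.2                # ~20% transient students (made up)
growth = np.where(transient,
                  rng.normal(5, 8, n),         # lower growth if transient
                  rng.normal(12, 8, n))
missing = transient & (rng.random(n) < 0.6)    # transient scores often untested

print(f"true average growth:         {growth.mean():.1f}")
print(f"estimate from observed only: {growth[~missing].mean():.1f} (biased upward)")
```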

  16. If measuring growth as a result of teacher and school effectiveness, other variables such as ethnicity, race, socioeconomic status, and other factors that influence how well and how fast students learn need to be controlled (Olson, 2004) • In a widely diverse student body, controlling for these factors may not be necessary, but no one knows how much diversity/integration is needed (Olson, 2004) • Even when controlling for student characteristics, differences in teacher quality may be hidden when all students are of a similar background (e.g., low SES) (Olson, 2004)
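
As a rough illustration of what “controlling for” such variables means, the sketch below residualizes simulated scores on two hypothetical covariates via ordinary least squares; the covariates, coefficients, and data are all invented for the example.

```python
# Sketch of "controlling for" background factors: regress scores on
# hypothetical covariates, then compare students (or classrooms) on the
# residuals -- the growth not explained by background.
import numpy as np

rng = np.random.default_rng(0)
n = 200
low_ses = rng.integers(0, 2, n)    # 1 = low socioeconomic status (simulated)
ell = rng.integers(0, 2, n)        # 1 = English-language learner (simulated)
scores = 500 - 15 * low_ses - 10 * ell + rng.normal(0, 20, n)

# Fit score ~ intercept + low_ses + ell by ordinary least squares.
X = np.column_stack([np.ones(n), low_ses, ell])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

# Residuals are the part of each score not explained by the covariates;
# averaging them by classroom would give a covariate-adjusted comparison.
residuals = scores - X @ beta
print("coefficients (intercept, low_ses, ell):", np.round(beta, 1))
print("mean residual (should be ~0):", round(float(residuals.mean()), 3))
```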

  17. Issues still exist with teacher professionalism; teachers who “teach to the test” directly rather than to the content standards will appear more effective than those who teach the curriculum (Bracey, 2006)

  18. Tennessee • Called TVAAS = Tennessee Value-Added Assessment System • Tests grades 3-8 in math, reading, language arts, science, and social studies • Improvement on the National Assessment of Educational Progress (NAEP) is credited to TVAAS, as is an 8% increase in math and science scores

  19. North Carolina • Average score of students over two years is the basis for the North Carolina value-added system • Low performing schools are assigned teams to assist in improving student achievement • High performing schools (and their teachers) are rewarded financially

  20. Recommendations • From the authors of the 2003 RAND report • Districts using VAM must release much more data so that outside analyses may be conducted • Develop databases to compare school districts • Use other measures of teacher effectiveness and compare them with VAM • Analyze possible sources of error (such as bias), analyze sensitivity, and conduct meta-analyses (McCaffrey, Lockwood, Koretz & Hamilton, 2003)

  21. Works Cited Bracey, G. W. (2004). Serious questions about the Tennessee value-added assessment system [Electronic version]. Phi Delta Kappan, 85(9), 716-717. Bracey, G. W. (2006). Value-added models, front and center [Electronic version]. Phi Delta Kappan, 87(6), 478-481. Center for Greater Philadelphia. (n.d.). North Carolina value-added assessment. Operation Public Education. http://www.cgp.upenn.edu/ope_nc.html Center for Greater Philadelphia. (n.d.). Value-added assessment. Operation Public Education. http://www.cgp.upenn.edu/ope_value.html Center for Greater Philadelphia. (n.d.). Value-added assessment in Tennessee. Operation Public Education. http://www.cgp.upenn.edu/ope_tn.html

  22. Fulcher, K. H., & Willse, J. T. (2007). Value-added: Back to basics in measuring change [Electronic version]. Assessment Update, 19(5), 10-12. Mahoney, J. W. (2006). How value-added assessment helps improve schools [Electronic version]. Edge, 1(4), 3-18. McCaffrey, D. F., Lockwood, J., Koretz, D., & Hamilton, L. (2003). Evaluating value-added models for teacher accountability (MG-158). Santa Monica, CA: RAND Corporation. Olson, L. (2004). Tennessee reconsiders value-added assessment system [Electronic version]. Education Week, 23(5), 9. Olson, L. (2004). Researchers debate merits of value-added measure [Electronic version]. Education Week, 24(12).

  23. Pickering, J., & Bowers, J. (1990). Assessing value-added outcomes assessment [Electronic version]. Measurement & Evaluation in Counseling & Development, 22(4), 1-8. Pipho, C. (1998). The value-added side of standards [Electronic version]. Phi Delta Kappan, 79(5), 341-343. Sanders, W. L. (2004). Compounding errors [Electronic version]. Phi Delta Kappan, 86(2), 174-175.
