
Consequential Validity


Presentation Transcript


  1. Consequential Validity. Mountain Plains Regional Resource Center Teleconference Series, May 30, 2007. Elizabeth Towles-Reeves and Jacqui Kearns, National Alternate Assessment Center (NAAC)

  2. Validity Should be Central • We argue that the purpose of the technical documentation is to provide data to support or refute the validity of the inferences from the alternate assessments at both the student and program level. • Drawing on Kane, we view the validity evaluation as building an argument (as in a legal or philosophical argument) to support or refute the inferences regarding the scores from the AA-AAS.

  3. The Challenge of Documenting Technical Quality of Alternate Assessments • heterogeneity of the group of students being assessed and how they demonstrate knowledge and skills; • often “flexible” assessment experiences; • relatively small numbers of students/tests; • evolving view/acceptance of academic curriculum (i.e., learning experiences); • the high degree of involvement of the teacher/assessor in administering the assessment (Gong & Marion, 2006); • non-traditional assessment approaches (e.g., portfolios, performance tasks/events), which require an expanded conceptualization of technical quality to be evaluated properly (Linn, Baker, & Dunbar, 1991).

  4. Expanding Technical Quality • Linn et al. (1991) pointed out that we already (15 years ago!) had the theoretical tools for expanding validity investigations, but that in practice validity is usually viewed too narrowly. • “Content frameworks are described, and specifications for the selection of items are provided for standardized achievement tests. Correlations with other tests and sometimes with teacher assessments of achievement may also be presented. Such information is relevant to judgments of validity but does not do justice to the concept” (p. 16). • We argue that AA-AAS technical evaluations are suffering the same fate.

  5. Shepard (1993) • Shepard (1993) advocated a straightforward means of prioritizing validity questions. Using an evaluation framework, she proposed that validity studies be organized in response to the questions: • What does the testing practice claim to do? • What are the arguments for and against the intended aims of the test? • What does the test do in the system other than what it claims, for good or bad? (Shepard, 1993, p. 429). • The questions are directed to concerns about the construct, relevance, interpretation, and social consequences, respectively. • We believe that this approach to prioritizing our questions is useful.

  6. The Assessment Triangle and Validity Evaluation • [Slide diagram: the assessment triangle, with its three vertices and their associated elements: Cognition (student population, academic content, theory of learning); Observation (assessment system, test development, administration, scoring); and Interpretation (reporting, alignment, item analysis/DIF/bias, measurement error, scaling and equating, standard setting). The triangle is framed by the Validity Evaluation: empirical evidence, theory and logic (argument), and consequential features.]

  7. Questions • After reviewing the materials on the website from the Inclusive Assessment Seminars, what questions do you have before we begin looking at the consequential validity features of AA-AAS systems?

  8. What is Consequential Validity? • Messick (1989) originally introduced consequences into the validity argument. Later, Shepard (1993, 1997) broadened the definition by arguing that one must investigate both positive/negative and intended/unintended consequences of score-based inferences to properly evaluate the validity of the assessment system.

  9. So What? • There is overwhelming support for answering the “So What” question (Haertel, 1999; Kane, 2002; Kleinert et al., 2001; Lane & Stone, 2002; Shepard, 1997), but at the same time differing stakeholder views must be included to present a convincing validity argument (Lane & Stone, 2002; Linn, 1998; Ryan, 2002).

  10. Intended Consequences • Lane and Stone (2002) suggest that state assessments are intended to impact: • Student, teacher, and administrator motivation and effort; • Curriculum and instructional content and strategies; • Content and format of classroom assessments; • Improved learning for all students; • Professional development support; • Use and nature of test preparation activities; and • Student, teacher, administrator, and public awareness and beliefs about the assessment, criteria for judging performance, and the use of assessment results.

  11. Unintended Consequences • Lane and Stone (2002) note, however, that unintended consequences are also possible, such as: • Narrowing of curriculum and instruction to focus only on the specific learning outcomes assessed; • Use of test preparation materials that are closely linked to the assessment without making changes to curriculum and instruction; • Use of unethical test preparation materials; and • Inappropriate use of test scores by administrators.

  12. Consequential Validity Evaluation Questions • Before you consider investigating any consequential validity questions for your alternate assessment judged against alternate achievement standards (AA-AAS), you must determine: • What is the purpose of the AA-AAS? • How will the scores of the AA-AAS be used? • Which stakeholders are important in helping you understand the consequences of the AA-AAS: students, parents, teachers, administrators, community members, experts?

  13. Consequential Validity Evaluation Questions • Once you determine purpose and use, you may then ask: • What are the intended and unintended consequences based on the purpose and use of the AA-AAS? • Are the intended and unintended consequences positive or negative?

  14. Looking to our Past to Prepare for the Future • Research on the consequential validity of alternate assessments from the perspective of: • Students/Parents • Research Questions: • What benefits to students have accrued from participation in the AA-AAS? • To what extent have students accessed the general education curriculum? • What is the impact of the AA-AAS on students’ IEP development? • What is the relationship between student performance in the AA-AAS and post-school life outcomes? • What student, teacher, and instructional variables influence parents’ perceptions regarding the AA-AAS?
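
A minimal sketch of how the post-school outcomes question might be examined, assuming a hypothetical follow-up file (aa_aas_followup.csv) with one row per exiting student; the file and column names are illustrative stand-ins, not from any actual state data system:

    import pandas as pd

    # Hypothetical follow-up file: each row holds a student's final
    # AA-AAS performance level and an employment indicator gathered
    # one year after exit (both names are placeholders).
    df = pd.read_csv("aa_aas_followup.csv")

    # Cross-tabulate performance level against the outcome as row
    # percentages, to see whether higher performance levels tend to
    # accompany better post-school outcomes.
    table = pd.crosstab(
        df["performance_level"],      # e.g., "Emerging", "Progressing", "Proficient"
        df["employed_one_year_out"],  # e.g., True / False
        normalize="index",
    )
    print(table.round(2))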

  15. Looking to our Past to Prepare for the Future • Research on the consequential validity of alternate assessments from the perspective of: • Teachers • Research Questions: • What benefits to teachers have accrued from the participation of students in the AA-AAS? • To what extent are alternate assessments part of the daily classroom routine? • What is the relationship between alternate assessment scores and the amount of time spent working on the assessment? • To what extent do teacher and instructional variables predict alternate assessment scores? • Which student, teacher, and instructional variables influence teachers’ perceptions regarding the AA-AAS? • What is the impact of the AA-AAS on teachers’ daily instruction?
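
One way the prediction question might be explored is with an ordinary least squares regression. In this sketch the file name (teacher_survey_with_scores.csv) and all three predictors are hypothetical; a state would substitute whatever teacher and instructional variables it actually collects:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical merged file: one row per student, with the student's
    # alternate assessment score joined to teacher survey responses.
    df = pd.read_csv("teacher_survey_with_scores.csv")

    # Regress the AA-AAS score on teacher experience, hours of AA-AAS
    # training, and weekly minutes of instruction aligned to the
    # assessed content standards (all placeholder variable names).
    model = smf.ols(
        "aa_aas_score ~ years_experience + training_hours + aligned_minutes",
        data=df,
    ).fit()
    print(model.summary())

With small student numbers, a typical AA-AAS challenge noted above, coefficient estimates will be noisy, so such a model is better treated as exploratory than as confirmatory evidence.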

  16. Looking to our Past to Prepare for the Future • Research on the consequential validity of alternate assessments from the perspective of: • School • Research Questions: • To what extent are students included in the accountability process? • Is there any relationship between student performance in the AA-AAS and student performance in the general assessment?
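
Because students taking the AA-AAS do not also take the general assessment, the second question is most naturally examined at the school level. A rough sketch, assuming a hypothetical school_results.csv with a percent-proficient column for each assessment:

    import pandas as pd
    from scipy.stats import spearmanr

    # Hypothetical file: one row per school, with percent proficient
    # on the AA-AAS and on the general assessment.
    schools = pd.read_csv("school_results.csv")

    # Spearman's rank correlation avoids assuming a linear relationship
    # between the two percent-proficient measures.
    rho, p_value = spearmanr(
        schools["pct_proficient_aa_aas"],
        schools["pct_proficient_general"],
    )
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")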

  17. Prioritization • No state can take on all of these research studies at once. • How might you prioritize them? • Gather stakeholders • Work through a guided discussion of which studies may be most important to conduct, based on stakeholder input • Select the top two studies for the short term (next 2-3 years) and the top two studies for the long term (next 3-5 years)
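
One simple way to turn that stakeholder discussion into a ranked agenda is to have each stakeholder rate each candidate study for importance and then rank studies by mean rating. A sketch with placeholder study names and made-up ratings:

    import pandas as pd

    # Placeholder data: three stakeholders each rate four candidate
    # studies on a 1-5 importance scale.
    ratings = pd.DataFrame(
        {
            "study": [
                "Impact on IEP development",
                "Access to general curriculum",
                "Teacher instructional impact",
                "AA-AAS vs. general assessment",
            ] * 3,
            "stakeholder": ["parent"] * 4 + ["teacher"] * 4 + ["administrator"] * 4,
            "importance": [5, 4, 3, 2, 4, 3, 5, 3, 3, 4, 4, 5],
        }
    )

    # Rank studies by mean importance: the top two become the
    # short-term agenda, the next two the long-term agenda.
    ranked = ratings.groupby("study")["importance"].mean().sort_values(ascending=False)
    print(ranked)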

  18. Questions • Any questions or clarifications?

  19. Questions • Have you gathered a stakeholder group to think about and prioritize consequential validity questions for the short term and long term? If so, who was involved? If not, which stakeholders do you think will be important to have at the table? • What consequential validity studies have you conducted in your state (e.g., looking at consequences related to students, teachers, and schools)? • What studies would you like to conduct but are unsure how to gather the data or carry out the study?
