Making Assessments Matter to Students Andrew Jones | Donna Sundre | Peter Swerdzewski | Carol Barry | Abby Lau
Low-Stakes Assessment, Student Motivation, and the Validity of Scores: A General Introduction
Making Assessments Matter to Students • Overview • Provide a general introduction to the concepts of low-stakes testing, student motivation, and the validity of scores • Provide an overview for the remainder of the session
The Need for Assessment • Accountability has led to an increased demand for universities to engage in assessment • Spellings Commission • Accreditation Agencies • State Councils
The Need for Assessment • Universities and stakeholders are using these assessments to make decisions about the effectiveness of programs • Should we be concerned with how these data are collected and, in turn, how these scores are used?
Motivation and Student Scores • Often, it is assumed that students put forth effort in an assessment situation • This is typically the case when stakes are attached to the testing situation • Classroom tests, quizzes • SAT, GRE • If a student does poorly, there will be negative consequences • Not admitted to college or graduate school • Conversely, there are rewards for doing well
Motivation and Assessment • Students can also be tested in situations where they do not perceive personal consequences • This results in a “low-stakes” testing situation
Motivation and Assessment • What is a low-stakes testing situation? • Scores on the assessment do not impact the student • No gain as a result of doing well • No punishment for doing poorly • Students may not even receive feedback on how they performed on the assessment
Motivation and Assessment • Many assessments in higher ed contexts can be categorized as “low-stakes” assessment • Especially the case for assessments administered for program evaluation or accountability
Motivation and Assessment • How do these low-stakes settings impact scores? • Students may not put forth as much effort in comparison to a “high-stakes” setting • Unmotivated students will not score as high on achievement tests (Wise & DeMars, 2005)
Why Does Reduced Effort Matter? • If students are not putting forth effort, then they are not truly representing what they know or who they are • Students may have increased greatly in a specific competency, but their scores would not reflect this due to a lack of effort on the items • Students' attitudes may have changed, but their scores would not reflect this because they have not attended properly to the instrument
Why Does Reduced Effort Matter? • Impacts the validity of scores from an assessment • What do we mean by validity? • Conceptually • May not be measuring what we say we are measuring • Essentially, scores are not as useful • Scores are not providing stakeholders with the information that they intended to gather!
Motivation and Assessment • Why not just make all higher ed assessments high-stakes tests? • Test Anxiety • Legal Issues • Creates a much more political environment • Typically, higher ed institutions are not interested in assessment at the individual level • The structure of universities is not geared toward high-stakes assessment
Motivation and Assessment • Given that low-stakes assessment will continue to be used in higher ed, we need to be concerned about this issue of motivation • Critical to assess motivation and find ways to improve motivation, as this impacts the inferences we make about programs
Overview of Presentations Today • Providing an overview of JMU's model of assessment (Donna Sundre) • Providing one method to measure motivation • What do students think of assessment, and how does this impact motivation? (Peter Swerdzewski) • How does control over the assessment environment impact student scores? (Carol Barry) • What impact can proctors have on student motivation in low-stakes settings? (Abby Lau) • General recommendations to increase motivation and the usefulness of test scores in low-stakes settings • Questions and discussion
Assessment at JMU Donna Sundre
The Assessment Culture at JMU • JMU requires students to take a series of student outcomes assessments prior to their graduation. These assessments are held at four stages of students' academic careers: • As entering first-year students • At the mid-undergraduate point, when they have earned 45 to 70 credit hours (typically the sophomore year) • As graduating seniors in their academic major(s) • Students will also complete an alumni survey after graduation • -JMU Undergraduate Catalog
The Assessment Culture at JMU • CARS supports all general education assessment • CARS administers all JMU alumni surveys • CARS supports assessment for every academic program • CARS supports assessment for the Division of Student Affairs • All programs must collect and report on assessment data annually • Academic Program Reviews are scheduled every 6 years for every major degree program, graduate and undergraduate
The Assessment Culture at JMU • Long-standing and pervasive expectation at JMU that assessment findings will guide decision-making • Annual reports, Assessment Progress Templates, program change proposals, and all academic program review self-study documents require substantial descriptions of how assessment guides decision-making • The Center for Assessment and Research Studies (CARS) is the largest higher education assessment center in the US, with 10 faculty, 3 support staff, and 15 graduate assistants
Data Collection Strategies • Two institution-wide Assessment Days • Fall (August): Incoming freshmen tested at orientation • Spring (February): Students with 45-70 credits, typically the sophomore year • Classes are cancelled on this day • All students are required to participate; otherwise, course registration is blocked • Students are randomly assigned, using the last two digits of their JMU ID number, to testing rooms where a particular series of instruments is administered (see the sketch below) • This results in large, representative samples of students • Student ID numbers do not change; therefore, we can ensure that students complete the same instruments at time 2 as they did at time 1 • JMU just completed its 23rd Spring Assessment Day • The Spring Assessment Day is also used by many majors to collect data on their graduating seniors
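To make the assignment idea concrete, here is a minimal sketch of routing students to testing rooms by the last two digits of an ID. The room labels, digit ranges, and function name are hypothetical illustrations, not JMU's actual procedure.

```python
# Hypothetical sketch: route students to testing rooms by the last two digits
# of their ID. Because those digits never change, a student is routed to the
# same instrument battery at time 1 and time 2. Room labels and the block
# size of 25 are illustrative assumptions, not JMU's actual assignment rules.

def assign_room(student_id: str) -> str:
    """Map the last two digits of a student ID to a testing room."""
    last_two = int(student_id[-2:])                    # e.g., "123447" -> 47
    rooms = ["Room A", "Room B", "Room C", "Room D"]   # hypothetical rooms
    return rooms[last_two // 25]                       # contiguous blocks of 25

if __name__ == "__main__":
    for sid in ["000123", "004987", "007055"]:
        print(sid, "->", assign_room(sid))
```

Because the last two digits are unrelated to ability or major, each room receives a roughly representative cross-section of the cohort, which is what makes room-level instrument assignment and the repeated-measures design on the next slide workable.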
Data Collection Scheme: Repeated Measures • Students in each cohort (Cohorts 1-3) are tested twice on the same instrument: once as incoming freshmen and again in the second semester of the sophomore year
What is Learning Assessment? Assessment is the systematic basis for making inferences about the learning and development of students.
Stages of the Assessment Process • A continuous cycle: Establishing Objectives → Selecting/Designing Instruments → Collecting Information → Analyzing/Maintaining Information → Using Information → back to Establishing Objectives
Not Just Any Data Will Do… • If we want faculty to pay attention to the results, we need credible evidence • To obtain credible evidence: • We need a representative sample or a census • We need good instrumentation • The tasks demanded must represent the content domain • Reliability and validity • We need students who are motivated to perform
Prerequisites for Quality Assessment • We must have three important components: • Excellence in sampling of students • Either large, representative student samples or a census • Sound assessment instrumentation • Reliable, valid assessment methods • Instruments that faculty find meaningful • Students who are motivated to participate in assessment activities • Can we tell if students are motivated? • Can we influence examinee motivation?
Fulfilling the Prerequisites • Excellence in sampling of students • Using our Assessment Day design, we can achieve this • Sound assessment instrumentation • Working collaboratively with departmental faculty, our CARS liaisons can facilitate the identification or development of sound tools • Students who are motivated to participate in assessment activities • Can we tell if students are motivated? YES! • Can we influence examinee motivation? YES!
Student Opinion Scale (SOS) • This is a 10-item instrument that provides two scores: • Importance: perceived importance of the task(s) • Effort: examinee self-report of the level of effort expended in task completion • Both measures result in reliability estimates in the mid-.80s • SOS scores are NOT correlated with SAT scores! • This instrument, scoring instructions, and manual are freely available and downloadable from www.jmu.edu/assessment/
Student Opinion Scale (SOS) • Responses are on a 5-point Likert scale • Strongly Disagree to Strongly Agree • Sample Items • Effort: I gave my best effort on these tests • Importance: Doing well on these tests was important to me
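As a concrete illustration of the two-score structure, here is a minimal scoring sketch. The item-to-subscale mapping (and the absence of reverse-keyed items) is assumed for illustration only; the actual keying is given in the SOS manual at www.jmu.edu/assessment/.

```python
# Minimal sketch of computing the two SOS subscale scores from 10 Likert
# responses coded 1 (Strongly Disagree) through 5 (Strongly Agree).
# The item positions below are hypothetical; consult the SOS manual for the
# real item-to-subscale keying and any reverse-scored items.

IMPORTANCE_ITEMS = [0, 2, 4, 6, 8]   # hypothetical positions of Importance items
EFFORT_ITEMS     = [1, 3, 5, 7, 9]   # hypothetical positions of Effort items

def score_sos(responses: list[int]) -> dict[str, int]:
    """Return Importance and Effort subscale scores (sums of item responses)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 10 responses on a 1-5 scale.")
    return {
        "importance": sum(responses[i] for i in IMPORTANCE_ITEMS),
        "effort":     sum(responses[i] for i in EFFORT_ITEMS),
    }

if __name__ == "__main__":
    example = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5]     # one examinee's responses
    print(score_sos(example))                    # {'importance': 16, 'effort': 23}
```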
Using the SOS Scores, we have • Described and quantified the level of our students' motivation • Shared this information with our faculty • Included SOS scores in our data analysis • Positively impacted student motivation levels • Improved our proctor selection and training • My colleagues will provide details on this work!
Peter Swerdzewski Making Assessment Matter to Students: Exploring Examinees’ Perceptions of Assessment
The Need to Consider Perception • Students are highly tested by the time they reach college… "America's public schools administer more than 100 million standardized exams each year, including IQ, achievement, screening, and readiness tests." (FairTest, 2007) • Students' perceptions about testing do influence the validity of the inferences we can make from their test scores.
Questions of Interest • What do students know and how do they feel about low-stakes institution-wide learning outcomes assessment? • How does this knowledge and affect contribute to test-taking motivation? • What do students suggest can be done to increase the validity of the inferences that college administrators can make from low-stakes assessments?
Methods • Focused on JMU’s Assessment Day tests • Data used to assess general education and student affairs programs • All freshmen and sophomores must participate in Assessment Day • Used two approaches to collect qualitative data: • Constructed responses from a Web-based survey • Focus groups • A modified grounded-theory mixed-methods approach was used to evaluate responses
Results: Question #1 • What do students know and how do they feel about low-stakes institution-wide learning outcomes assessment?
“Yes” • “Yes. Assessment Day gives the faculty of JMU the opportunity for insight into what their students are learning and thinking. It also provides a social blueprint in terms of how students think of themselves and others. If there were no assessments, JMU administrators would have nothing off of which to base the structuring of courses provided here or the way in which they are taught.” • “Justin” • “It is always helpful to reflect on the past and to see how I can stay focused on my goals.” • “Abigail”
“Yes, but…” • “It probably is, so I put forth the effort, but to me personally I doubt it will affect me at all.” • “Neil” • “I guess it's valuable in the long run. It doesn't seem very valuable because we do not hear much about it after we finish taking the tests. If the results and data were actually shown to us in an interesting way then it would seem more valuable.” • “Rachel”
“No” • “No. It has no bearing on anything.” • “Kevin” • “No. We have been taking assesments all our lives. Enough is enough.” • “Timothy” • “Because of our individualist society people feel the need to compare themselves to everyone else. JMU obviously wants to see how well their students are doing so that they may compare its overall achievement as a university to other universities. Because this is the accepted way to measure acheivement[,] Assesment day is valuable. But in [no] way does it enrich the soul, which is the true purpose of education.” • “Ben”