

  1. Instructional Practice and Student Outcomes, by Sherri DeBoef Chandler. New Mexico Evaluation and Assessment Conference (2008). Assessment Focus, Theoretical Foundation, Methodology & Statistical Analyses, Assessment Findings, Recommendations, References

  2. purpose of assessment ✦ to discern if instruction is meeting or exceeding the standards of its objectives (Banta, 2004; Breakwell, Hammond & Fife-Shaw, 2002; Burke & Minassians, 2004) ✦ to ask: “Is what we are doing working? How do we know? What changes do we need to make?” (Rouseff-Baker & Holm, 2004, pp. 30-41)

  3. sample (of students & of instructional outcomes) ✦ a two-year college, ✦ an introductory social science course, ✦ criterion-referenced courses, ✦ multiple sections w/ full-time instructors, ✦ fifteen courses per group (30 sections), ✦ 9 a.m. – 3 p.m. sections only, ✦ over a three-year period, ✦ data collected from 3–6 years previous

  4. student outcomes Figure 1. Student outcomes of 60 sections, over 5 years (N = 1820), w/ sample of 30 sections, over 3 years (N = 900).

  5. instructional practice variables Classroom goal structure Frequency of feedback ✔ Grading practices ✔ Instruction aligned w/ assessment Instructional efficacy Multiple instructional modalities Multiple assessment modalities Work pace ✔ Other (?) (Bain, 2004; Barefoot & Gardner, 2005; McKeachie, 2002; Pascarella & Terenzini, 2005)

  6. student feedback considerations Identify the goal, practice incremental parts, view feedback, adapt strategies to reach the goal. Feedback needs to be: ✦ based upon standards of competency (determined collaboratively), ✦ optimistic, ✦ accurate and specific, ✦ frequent. (Brookhart, 1994; Marzano, Pickering & Pollack, 2001; Rinaldo, 2005)

  7. grading practices Course grading practices ✦ criterion-referenced: based upon a predetermined point scale aligned with the achievement of competencies; also called standards-referenced or absolute grading; ✦ norm-referenced: derived from ranking the performance of students, which increases competition between students and decreases students’ ability to adapt their performance; also known as grading on the curve or relative grading. (Dougherty, Hilberg & Epaloose, 2002; Popham, 2005; Stiggins, 2005; Walvoord & Johnson Anderson, 1998)
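
As an illustration of the contrast described above, the short sketch below grades the same hypothetical set of scores both ways; the names, scores, and cut points are invented for illustration and are not drawn from the study.

    # Hypothetical scores; cut points are illustrative, not from the study.
    scores = {"Ana": 72, "Ben": 75, "Cam": 78, "Dee": 81, "Eli": 84}

    def criterion_referenced(score):
        """Absolute scale: every student who reaches a standard earns that grade."""
        cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
        for minimum, grade in cutoffs:
            if score >= minimum:
                return grade
        return "F"

    def norm_referenced(all_scores):
        """Relative scale (grading on the curve): grades follow rank order,
        so some students receive low grades regardless of absolute mastery."""
        ranking = sorted(all_scores, key=all_scores.get, reverse=True)
        curve = ["A", "B", "C", "D", "F"]  # one grade per rank position in this toy example
        return {name: curve[i] for i, name in enumerate(ranking)}

    print({name: criterion_referenced(s) for name, s in scores.items()})
    # {'Ana': 'C', 'Ben': 'C', 'Cam': 'C', 'Dee': 'B', 'Eli': 'B'}
    print(norm_referenced(scores))
    # {'Eli': 'A', 'Dee': 'B', 'Cam': 'C', 'Ben': 'D', 'Ana': 'F'}

With the criterion-referenced scale, every student who clears a standard earns that grade; the curve forces a spread of grades no matter how tightly the scores cluster.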

  8. syllabi comparisons Table 1: Work pace by instructional groups (N = 2 instructional groups, 826 students).

  9. Figure 2. Performance including attrition by course work pace, ranging from too little challenge, through optimal challenge, to too much challenge. (Addison, Best & Warrington, 2006; Johnston & Kristovich, 2000; Marsh & Roche, 2000)

  10. work pace theory Overloaded students ✦ report few feelings of success, ✦ feel forced to accept less in terms of their learning in order to manage the quantity of material (Greenwald & Gillmore, 1997; Kember, 2004). Courses with either too much or too little challenge ✦ leave students frustrated with ✦ a lack of understanding of course content (Harrison, Ryan & Moore, 1996, p. 780)

  11. student evaluation of instruction (SEI) Figure 3. Student learning and performance correlations. (Abrami & D’Appolonia, 1999; Marsh, 1998; Scriven, 1995; Seiler & Seiler, 2002)

  12. course work pace ✦ American textbooks cover many more topics than European or Asian texts. ✦ Instructors who incorporate comprehensive text content produce courses where instruction is “shallow and too brief.” ✦ Characteristics of courses that prompt a surface approach to learning include “an excessive amount of course material [with the concomitant lack of] opportunity to pursue the subject in depth.” (Andreoli-Mathie, et al., 2002, p. 202; Bracey, 2006, p. 135; Upcraft et al., 2005, p. 248; Wiggins, 1998)

  13. study methodology Practitioner research has been characterized as having the capacity to get at: ✦ underlying meaning for individuals, ✦ documents and settings, ✦ w/ the expectation of building theory, ✦ & promoting needed reforms. (Boggs, 1999; Lundenberg & Ornstein, 2004; Rinaldo, 2005; Valsa, 2005; Zeichner & Noffke, 2001)

  14. ✦ This study examined ex post facto data, precluding random assignment of participants. The sample size, gathered over three years, reduces the influence of potentially confounding variables (such as SES), which are likely to be spread throughout the population. A larger sample size reduces the potential that confounding variables will become entangled with learning outcomes (Davis & Hillman Murrell, 1993, p. 23). ✦ Multiple data collection points included triangulation of different kinds of records over time (achievement test scores, student grades, course syllabi) to offset the possibility of idiosyncratic data being misrepresented as typical data (Lincoln & Guba, 1985).

  15. Figure 4. Sex of students by work pace group (N = 826).

  16. Figure 5. Ability of students by work pace group (N = 826).

  17. Multiple Regression Correlation (MRC) ✦ Unequal groups (number of students in each course / group). ✦ Data that is nominal / ordinal (variables consist of nominal & ordinal data). ✦ Developed specifically to deal with dichotomous dependent variables. ✦ F-ratio statistic used for significance testing for non-curvilinear data. (Gillespie & Noble, 1992; Keppel, 1991; McMillan & Wergin, 2007)
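
As a rough, non-authoritative sketch of this kind of analysis (not the study's actual code), the Python fragment below fits a regression with dummy-coded predictors and reports the overall F-ratio and r², using the pandas and statsmodels libraries; the file name and the column names (performance, sex, ability, work_pace) are hypothetical placeholders.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data set: one row per student, with dummy-coded (0/1) predictors
    # for sex, ability grouping, and work-pace group; 'performance' is the outcome.
    df = pd.read_csv("student_outcomes.csv")  # hypothetical file name

    model = smf.ols("performance ~ sex + ability + work_pace", data=df).fit()
    print(model.fvalue, model.f_pvalue)  # overall F-ratio and its p-value
    print(model.rsquared)                # proportion of variance explained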

  18. Figure 6. Student performance and attrition by instructional groups (N = 900; student-rated quality 2.1 vs. 4.8).

  19. ✦ MRC was chosen because multiple instructional & student characteristics could potentially exert a combined influence on the relationship between the instructional & student predictor variables. ✦ MRC is robust concerning the violation of the normal distribution requirement as long as variables are independent. ✦ Inspection of scatterplots & histograms indicated no violation of the required assumptions. (Keppel, Saufley & Tokunaga, 1992; Morgan, Gliner & Harmon, 2001)

  20. ✦ The assumptions of MRC (including linearity, independent observations, normally distributed sample w/ few outliers, adequate sample size, etc.) were met for this analysis. ✦ The F statistic was utilized to determine whether relationships were significant (it is insensitive to violations of the assumptions of normality & homogeneity of variances). (Bordens & Abbott, 2005, pp. 427–431; Keppel & Zedeck, 1989)
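
A minimal sketch of the assumption checks mentioned above, assuming the same hypothetical data file and column names as in the earlier fragment; the residual scatterplot and histogram correspond to the linearity/homogeneity and normality/outlier checks the slide lists.

    import matplotlib.pyplot as plt
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("student_outcomes.csv")  # hypothetical file name
    model = smf.ols("performance ~ sex + ability + work_pace", data=df).fit()

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    axes[0].scatter(model.fittedvalues, model.resid)   # curvature or funnel shapes flag problems
    axes[0].axhline(0, color="grey")
    axes[0].set(xlabel="fitted values", ylabel="residuals", title="linearity / equal spread")
    axes[1].hist(model.resid, bins=20)                 # roughly normal, with few extreme outliers?
    axes[1].set(title="distribution of residuals")
    plt.tight_layout()
    plt.show()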

  21. Table 2. Multiple regression of performance by work pace groups (N = 826). Step 1 (sex): r = .141, r² = .020; Step 2 (ability): r = .158, r² = .025, Δr² = .005; Step 3 (work pace): r = .521, r² = .271, Δr² = .246. * p < .05; ** p < .001.
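
The stepwise entry behind a table like Table 2 can be sketched as follows: predictors are added in blocks (sex, then ability, then work pace) and the change in r² is read off at each step. The data file and column names are hypothetical placeholders, not the study's data.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("student_outcomes.csv")  # hypothetical file name

    # Predictors entered in the same order as Table 2: sex, then ability, then work pace.
    steps = [
        ("Step 1 (sex)",       "performance ~ sex"),
        ("Step 2 (ability)",   "performance ~ sex + ability"),
        ("Step 3 (work pace)", "performance ~ sex + ability + work_pace"),
    ]

    previous_r2 = 0.0
    for label, formula in steps:
        fit = smf.ols(formula, data=df).fit()
        print(f"{label}: r2 = {fit.rsquared:.3f}, r2 change = {fit.rsquared - previous_r2:.3f}")
        previous_r2 = fit.rsquared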

  22. discussion of findings ✦ Performance (including attrition) rates were associated with work pace requirements in courses with criterion-referenced grading practices, with a large effect size (r = .521, p < .001). ✦ The variables of student sex and student ability did NOT emerge as significant predictors, either independently or interactively, of student performance including attrition outcomes. ✦ Course work pace accounted for 24% of the variation in student performance including attrition (leaving 76% of the variability in student outcomes unaccounted for in this study).

  23. recommendations ✦ Examining the distribution of student outcomes by course and instructor may reveal more about instructional practices and student outcomes in educational contexts than the analyses of aggregate course and program data. ✦ This study points to the necessity of identifying course work pace and type of course grading methods and including these instructional practices as predictor variables for any meaningful assessment of student outcomes.

  24. standardization ✦ Faculty must work together to standardize course competencies across sections; grades need to be aligned with levels of course competency, in alignment with outstanding universities in their geographical area. (Darling-Hammond, 2000; Friedlander & Serban, 2004; Popham, 2005) ✦ Course standardization is necessary for any comparison of student outcomes across course sections that can be used to establish benchmarks & set enrollment, performance, & retention goals. (Alstete, 1995; Bryant, 2001; Townsend & Dougherty, 2006; Walvoord, 2004)

  25. * explanation for different totals Population: four full-time instructors, across 5 years, 15 courses each (total 60 courses); with student drops and “retakes” present: N = 1820. Student retakes are students with a D, F, or W who “retake” the course, sometimes the same student 2–5 times; each student was retained in the sample only for the initial attempt in the data set. Total sample: two instructors, across 3 years, 15 courses each; with drops and retakes: N = 920; without drops or retakes: N = 836 (no sex or ability scores were available for student drops). Note: more students chose to retake the course in the X instructional group and were removed, further reducing the enrollment gap between the two groups.
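
A small sketch of the retake-handling rule described above (keep only each student's first enrollment in the course); the file and column names are hypothetical placeholders.

    import pandas as pd

    # Hypothetical enrollment records: one row per course attempt, with a student
    # identifier, a term field for ordering, and the grade earned (D/F/W mark retakes).
    records = pd.read_csv("enrollments.csv")               # hypothetical file name
    records = records.sort_values(["student_id", "term"])  # earliest attempt first
    first_attempts = records.drop_duplicates(subset="student_id", keep="first")
    print(len(records), len(first_attempts))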

  26. References Abrami, P., & D’Appolonia, S. (1999). Current concerns are past concerns. American Psychologist, 54(7), 519–520. ACT Institutional Data. (2002). Retrieved 1/29/04 from http://www.act.org/path/policy/pdf/retain_2002.pdf ACT, Inc. (2004). Crisis at the core: Preparing all students for college and work. Iowa City, IA: Author. ACT, Inc. (2006). Reading between the lines: What the ACT reveals about college readiness in reading. Iowa City, IA: Author. Addison, W., Best, J., & Warrington, H. (2006). Students’ perceptions of course difficulty and their ratings of the instructor. College Student Journal, 40(2). (Accessed 07/27/06.) Adelman, C. (1992). The way we were: The community college as American thermometer. Washington, DC: U.S. Government Printing Office. Alstete, J. W. (1995). Benchmarking in higher education: Adapting best practices to improve quality. ASHE-ERIC Higher Education Report no. 5. Washington, DC: Office of Educational Research and Improvement. Andreoli-Mathie, V., Beins, B., Ludy, T. B., Wing, M., Henderson, B., McAdam, I., & Smith, R. (2002). Promoting active learning in psychology courses. In T. McGovern (Ed.), Handbook for enhancing undergraduate education in psychology. Washington, DC: American Psychological Association. Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard University Press. Banta, T. W. (Ed.). (2004). Community college assessment. San Francisco: Jossey-Bass. Barefoot, B., & Gardner, N. (Eds.). (2005). Achieving and sustaining institutional excellence for the first year of college. San Francisco: Jossey-Bass. Barr, R. B. (1998, September/October). Obstacles to implementing the learning paradigm – What it takes to overcome them. About Campus. Bender, B., & Shuh, J. (Eds.). (2002, Summer). Using benchmarking to inform practices in higher education. New Directions for Higher Education, 118. San Francisco: Jossey-Bass.

  27. References Bers, T. H., & Calhoun, H. D. (2004, Spring). Literature on community colleges: An overview. New Directions for Community Colleges, 117, 5–12. Wiley Publications. Boggs, G. R. (1999). What the learning paradigm means for faculty. AAHE Bulletin, 51(5), 3–5. Bordens, K., & Abbott, B. (2004). Research design and methods: A process approach (6th ed.). Boston, MA: McGraw-Hill. Bracey, G. W. (2006). Reading educational research: How to avoid getting statistically snookered. Portsmouth, NH: Heinemann. Bryant, A. N. (2001). Community college students: Recent findings and trends. Community College Review, 29(3), 77–93. Burke, J. C., & Minassians, H. P. (2004, Summer). Implications of state performance indicators for community college assessment. New Directions for Community Colleges, 126, 53–64. Brookhart, S. M. (1994). Teachers’ grading: Practice and theory. Applied Measurement in Education, 7(4). Connor-Greene, P. A. (2000). Assessing and promoting student learning: Blurring the line between teaching and testing. Teaching of Psychology, 27(2). Costa, A. L., & Kallick, B. O. (Eds.). (1995). Assessment in the learning organization: Shifting the paradigm. Alexandria, VA: Association for Supervision & Curriculum Development. Darling-Hammond, L. (2000, January). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1). Davis, T. M., & Hillman Murrell, P. (1993). Turning teaching into learning: The role of student responsibility in the collegiate experience. ASHE-ERIC Higher Education Reports, Report 8. Washington, DC: The George Washington University. Donmoyer, R. (2001). Paradigm talk reconsidered. In V. Richardson (Ed.), Handbook of research on teaching (4th ed.). Washington, DC: American Educational Research Association. Dougherty, W., Hilberg, S., & Epaloose, G. (2002). Standards performance continuum: Development and validation of a measure of effective pedagogy. Journal of Educational Research, 96, 78.

  28. References Entwistle, N. (2005, March). Learning outcomes and ways of thinking across contrasting disciplines and settings in higher education. The Curriculum Journal, 16(1). Ewell, P. (1997, December). Organizing for learning: A new imperative. AAHE Bulletin, 3(6). Figlio, D. N., & Lucas, M. E. (2000). Do high grading standards affect student performance? Cambridge, MA: National Bureau of Economic Research. Friedlander, J., & Serban, A. M. (2004, Summer). Meeting the challenges of assessing student learning outcomes. New Directions for Community Colleges, 126, 101–109. Gillespie, M., & Noble, J. (1992, November). Factors affecting student persistence: A longitudinal study. American College Testing Program, Iowa City, IA. Greenwald, A. G., & Gillmore, G. M. (1997). No pain, no gain? The importance of measuring course workload in student ratings of instruction. Journal of Educational Psychology, 89(4). Harrison, P., Ryan, J., & Moore, P. (1996, December). College students’ self-insight and common implicit theories in ratings of teaching effectiveness. Journal of Educational Psychology, 88(4), 775–782. Harvey, L., & Newton, J. (2004, July). Transforming quality evaluation. Quality in Higher Education, 10(2). Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Needham Hts., MA: Allyn & Bacon. Johnston, G. H., & Kristovich, S. (2000, Spring). Community college alchemists: Turning data into information. New Directions for Community Colleges, 109, 63–74. Kember, D. (2004). Interpreting student workload and the factors which shape students’ perceptions of their workload. Studies in Higher Education, 29(2), 165–184. Keppel, G. (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice Hall. Keppel, G., Saufley, W. H., Jr., & Tokunaga, H. (1992). Introduction to design and analysis: A student’s handbook (2nd ed.). New York: W. H. Freeman and Company.

  29. References Keppel, G., & Zedeck, S. (1989). Data analysis for research designs. Belmont, CA: Worth Publishers. Levine, D., & Lezotte, L. (1990). Universally effective schools: A review and analysis of research and practice. Madison, WI: National Center. Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications. Marsh, H. (1998). Students’ evaluation of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, 11, 253–388. Marsh, H., & Roche, L. (2000, March). Effects of grading leniency and low workload on students’ evaluations of teaching: Popular myth, bias, validity, or innocent bystanders? Journal of Educational Psychology, 92(1), 202–208. Marzano, R. J., Pickering, D. J., & Pollack, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: McRel Institute. McClenney, K. M. (2006, Summer). Benchmarking effective educational practice. New Directions for Community Colleges, 134, 47–55. McKeachie, W. (2002). McKeachie's teaching tips: Strategies, research, and theory for college and university teachers (11th ed.). New York: Houghton Mifflin. McMillan, J. H. (Ed.). (1998). Assessing students' learning. San Francisco: Jossey-Bass. McMillan, J. H., & Wergin, J. F. (2007). Understanding and evaluating educational research (3rd ed.). Upper Saddle River, NJ: Pearson. Morgan, G., Gliner, J., & Harmon, R. (2001). Understanding research methods and statistics: A practitioner’s guide for evaluating research. Mahwah, NJ: Lawrence Erlbaum Associates. O’Banion, T. (1997). Creating more learning-centered community colleges. League for Innovation in the Community College. (ERIC report downloaded 07/12/06). Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students: Vol. 2. A third decade of research. San Francisco: Jossey-Bass. Popham, W. J. (2005). Classroom assessment: What teachers need to know (4th ed.). Boston, MA: Allyn & Bacon Publishers.

  30. References Ratcliff, J. L., Grace, J. D., Kehoe, J., & Terenzini, P., and Associates. (1996). Realizing the potential: Improving postsecondary teaching, learning, and assessment. Office of Educational Research and Improvement. Washington, DC: U. S. Government Printing Office. Rinaldo, V. (2005, October/November). Today’s practitioner is both qualitative and quantitative researcher. The High School Journal, 89. The University of North Carolina Press. Rouseff-Baker, F., & Holm, A. (2004, Summer). Engaging faculty and students in classroom assessment of learning. New Directions for Community Colleges, 126, 29–42. Serban, A. (2004, Summer). Assessment of student learning outcomes at the institutional level. New Directions for Community Colleges, 126. Wiley Periodicals. Scriven, M. (1995). Student ratings offer useful interpretation to teacher evaluation. Practical Assessment, Research & Evaluation, 4(7). http://aera.net/pare/getvn.asp/ Seiler, V., & Seiler, M. (2002, Spring). Professors who make the grade. Review of Business, 23(2), 39. Stiggins, R. J. (2005). Student-involved assessment for learning (4th ed.). Upper Saddle River, NJ: Pearson & Prentice Hall Publishers. Tagg, J. (2003). The learning paradigm college. Williston, VT: Anker Publishing Company, Incorporated. Tinto, V. (1993). Leaving college: Rethinking the causes and cures of student attrition (2nd ed.). Chicago: University of Chicago Press. Townsend, B. K., & Dougherty, K. J. (2006, Winter). Community college missions in the 21st century. New Directions for Community Colleges, 136. Upcraft, M. L., Gardner, J., Barefoot, B., & Associates. (2005). Challenging and supporting the first-year student. San Francisco: Jossey-Bass. Valsa, K. (2005). Action research for improving practices: A practical guide. Thousand Oaks, CA: Paul Chapman Publishing. Walvoord, B. E., & Johnson Anderson, V. (1998). Effective grading: A tool for learning and assessment. San Francisco: Jossey-Bass. Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass. Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.
