
Sampling...

Sampling is the process of collecting a sample from a defined population with the intent that the sample accurately represents the population.


Presentation Transcript


  1. Sampling... is the process of collecting a sample from a defined population with the intent that the sample accurately represents the population. • Target population - all the members of the real or hypothetical set of people, events, or objects to which researchers wish to generalize the results of their research. • Accessible population - all the individuals who realistically could be included in the sample. BGB, p. 220

  2. Random Sampling • Sampling frame - all the members of the accessible population • Simple random sample - sampling in such a way that all members/units of the accessible population have an equal and independent chance of being selected as a member/unit of the sample
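Drawing a simple random sample is straightforward with any uniform random number generator. A minimal Python sketch, with a made-up sampling frame and sample size for illustration:

```python
import random

# Hypothetical sampling frame: the accessible population, numbered 1-500
sampling_frame = list(range(1, 501))

# Simple random sample: every member has an equal, independent chance
# of selection; random.sample draws without replacement
random.seed(42)  # fixed seed only so the draw is reproducible
sample = random.sample(sampling_frame, k=25)

print(len(sample))       # 25
print(len(set(sample)))  # 25 -- no member selected twice
```

Sampling without replacement is what keeps any one member from appearing twice in the sample.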

  3. Quant. Sampling Strategies • Systematic sampling - selecting every nth member from a list • Stratified sampling - sampling so that certain subgroups are adequately represented • Cluster sampling - sampling to include naturally occurring groups • Convenience sampling - sampling at the convenience of the researcher
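The first two strategies above are mechanical enough to sketch in code. A hedged Python illustration, in which the population, subgroup labels, and sample sizes are all invented:

```python
import random

random.seed(1)

# Hypothetical accessible population: (id, subgroup) pairs
population = [(i, "A" if i % 3 else "B") for i in range(1, 91)]

# Systematic sampling: take every nth member from the list (here n = 10)
systematic = population[::10]

# Stratified sampling: draw separately within each subgroup so that
# both strata are represented in the sample
strata = {}
for member in population:
    strata.setdefault(member[1], []).append(member)
stratified = [m for group in strata.values() for m in random.sample(group, k=3)]

print(len(systematic))  # 9 members: ids 1, 11, 21, ..., 81
print(len(stratified))  # 6 members: three from each of the two strata
```

Note that systematic sampling only approximates a random sample when the list order is unrelated to the variable being studied.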

  4. Purposeful Sampling (Qualitative) • Extreme or deviant case sampling • Intensity sampling • Typical case sampling • Maximum variation sampling • Homogeneous sampling • Critical case sampling BGB, p. 231-237

  5. Purposeful Sampling (cont.) • Snowball or chain sampling • Criterion sampling • Theory-based or operational construct sampling • Convenience sampling • Opportunistic sampling

  6. Criteria for Test Selection • Objectivity - whether scores are undistorted by bias of the individuals who administer and score the test. • Standard conditions of administration and scoring - increases objectivity • Normative data - scores are interpreted relative to the performance of other individuals in a defined group; Criterion-referenced interpretation - scores are interpreted relative to some absolute performance standard BGB, p. 247-253

  7. Criteria for Test Selection (cont.) Validity - “The appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores” (APA, AERA, NCME) • Construct validity - the extent to which a particular test can be shown to assess the construct it is purported to measure. • Content validity - the degree to which the scores yielded by a test adequately represent the content, or conceptual domain, that they purport to measure.

  8. Criteria for Test Selection (cont.) Criterion-related Validity: • Predictive validity - the degree to which the predictions made by a test are confirmed by the later behavior (the criterion) of the individuals to whom the test was administered. • Concurrent validity - the extent to which individuals’ scores on a new test correspond to their scores on an established test (the criterion) of the same construct that is administered shortly before or after the new test.

  9. Criteria for Test Selection (cont.) • Consequential validity - The test scores, the theory and beliefs behind the construct, and the language used to label the construct also embody certain values and have value-laden consequences when used to make decisions about individuals. Inferences from test scores and the way we use test scores are valid for particular uses. BGB, p. 252-253

  10. Reliability of Test Scores • Reliability - refers to how much measurement error is present in scores yielded by the test. • Measurement error - the difference between an individual’s true score on a test and the scores that she actually obtains on it over a variety of conditions. (Both the true scores and measurement error are hypothetical.) BGB, p. 253-255
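Classical test theory writes an observed score as true score plus error, X = T + E, and defines reliability as the proportion of observed-score variance due to true-score variance. A hedged simulation in Python; the true-score and error distributions here are invented purely for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical true scores for 1,000 examinees (both true scores and
# measurement error are unobservable in practice; we simulate them)
true_scores = [random.gauss(100, 15) for _ in range(1000)]

# Observed score = true score + measurement error (error ~ N(0, 5))
observed = [t + random.gauss(0, 5) for t in true_scores]

# Reliability: share of observed-score variance that is true-score variance
reliability = statistics.variance(true_scores) / statistics.variance(observed)
print(round(reliability, 2))  # roughly 0.90, i.e. 15^2 / (15^2 + 5^2)
```

Shrinking the error standard deviation toward zero pushes the ratio toward 1, which is why less measurement error means higher reliability.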

  11. Estimating Reliability • Alternative-form reliability - determines the equivalence of different forms of the same test - coefficient of equivalence • Test-Retest reliability - estimating test score reliability in which the occasion of test administration is examined - coefficient of stability BGB, p. 253-255

  12. Estimating Reliability (cont.) • Internal Consistency - estimating test score reliability in which the individual items of the test are examined - coefficient of internal consistency • Inter-Tester Reliability - estimating test administration errors • Generalizability Theory - provides a way of conceptualizing and assessing the relative contribution of different sources of measurement error to the score that you obtain. BGB, p. 254-257
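The most widely reported coefficient of internal consistency is Cronbach's alpha, α = k/(k−1) · (1 − Σσᵢ² / σₜ²), where k is the number of items, σᵢ² the variance of item i, and σₜ² the variance of total scores. A hedged Python sketch over a fabricated item-score matrix:

```python
import statistics

# Hypothetical scores: 6 examinees x 4 test items
items = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]

k = len(items[0])  # number of items
item_vars = [statistics.variance(col) for col in zip(*items)]   # one variance per item
total_var = statistics.variance([sum(row) for row in items])    # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.95 for these fabricated data
```

Alpha rises when items covary strongly (the total-score variance grows relative to the sum of item variances), which is exactly the sense in which the items "hang together."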

  13. Types of Performance Tests • Intelligence tests - IQ tests (Stanford-Binet) • Aptitude tests - predict future performance (Metropolitan Readiness) • Achievement tests - measure students' knowledge, understanding, or problem-solving (Stanford) • Diagnostic tests - identify students' strengths and weaknesses (Diagnostic Math Inventory) BGB, p. 265-266

  14. Performance Assessment • Performance assessment - authentic assessment or alternative assessment - tasks represent complex, complete, real-life tasks • Portfolio - a purposeful collection of a student’s work that records progress in mastering a subject domain and personal reflections • Rubric - criteria and a measurement scale for different proficiency levels BGB, p. 266-267

  15. Locating Tests • Mental Measurement Yearbook • Tests in Print • Computer Databases such as the ERIC/AE Test Locator, including Test Critiques BGB, p. 274-275
