
Taking Stock Of Measurement


Presentation Transcript


  1. Taking Stock Of Measurement

  2. Basics Of Measurement Measurement: Assignment of numbers to objects or events according to specific rules. Conceptual variables: Abstract ideas that form the basis for research designs. Measured variables: Numbers that represent conceptual variables and that are used in data analysis. Operational definition: A statement indicating how conceptual variables are transformed into measured variables.

  3. Converging Operations: From Rome With Love Using more than one measurement or research approach to study a given topic, with the hope that all of the approaches yield similar results.

  4. Equestrian Statue of Marcus Aurelius: A Lesson In Multidimensional Sight

  5. Scales Of Measurement Nominal: A variable that names or identifies a particular characteristic (e.g., sex, religion). Ordinal: A measured variable in which the numbers indicate whether there is more or less of a conceptual variable but do not indicate the exact distance between individuals on the conceptual variable (e.g., IQ scores, grade orientation). Many conceptual variables studied by psychologists are nominal or ordinal.

  6. Scales Of Measurement Interval: A measured variable in which equal changes in the measured variable are known to correspond to equal changes in the conceptual variable (e.g., Fahrenheit temperature). Ratio: Scales in which there is a zero point representing a complete absence of the conceptual variable (e.g., Kelvin temperature, weight, assembly rates).
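
One way to see the differences among the four scales is to ask which arithmetic comparisons each one supports. The following Python sketch is only illustrative; all variable names and values are made up:

```python
# Illustrative only: hypothetical values showing which operations each scale supports.

# Nominal: numbers or labels only identify categories; equality is the only meaningful test.
religion_a, religion_b = "Catholic", "Hindu"
print(religion_a == religion_b)          # meaningful: same category or not

# Ordinal: order is meaningful, but differences between ranks are not.
class_rank_a, class_rank_b = 3, 7
print(class_rank_a < class_rank_b)       # meaningful: a is ranked higher
# (class_rank_b - class_rank_a) is NOT guaranteed to reflect equal spacing

# Interval: differences are meaningful, but there is no true zero, so ratios are not.
temp_f_monday, temp_f_tuesday = 40.0, 80.0
print(temp_f_tuesday - temp_f_monday)    # meaningful: 40 degrees warmer
# temp_f_tuesday / temp_f_monday == 2, but Tuesday is not "twice as hot"

# Ratio: a true zero exists, so ratios are meaningful.
weight_a, weight_b = 50.0, 100.0
print(weight_b / weight_a)               # meaningful: b is twice as heavy
```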

  7. Two Types Of Error Random Error: Chance fluctuations in measurement that influence scores on a conceptual variable. These tend to be self-canceling (e.g., coding errors, mood). Systematic Error: The impact of other conceptual variables on the measured variable. These tend not to be self-canceling. For example, the tendency to present oneself in the best possible light could impact measures of attitude. Reliability: The extent to which a measure is free of random error.
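
The "self-canceling" point can be checked with a small simulation. A hypothetical Python sketch, in which the true score, error size, and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = 50.0
n = 10_000

# Random error: zero-mean noise; it averages out across many measurements.
random_error = rng.normal(loc=0.0, scale=5.0, size=n)
print(np.mean(true_score + random_error))                     # close to 50

# Systematic error: a constant bias (e.g., self-presentation); it does not cancel.
systematic_bias = 4.0
print(np.mean(true_score + random_error + systematic_bias))   # close to 54
```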

  8. Test-Retest Reliability And Equivalent Forms Test-Retest: The extent to which scores on a variable correlate between test sessions. Measures the stability of scores over time. Retesting Effects: Experience with taking the test may affect responses on the second test session. This can affect test-retest reliability. Equivalent Forms: Two parallel forms of the same test are administered. This is one way to overcome retesting effects.
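
In practice, test-retest reliability is estimated as the correlation between the two administrations. A minimal Python sketch with invented scores for eight participants tested twice on the same measure:

```python
import numpy as np

# Hypothetical scores for 8 participants at two test sessions.
time1 = np.array([12, 15, 9, 20, 18, 11, 16, 14])
time2 = np.array([13, 14, 10, 19, 17, 12, 15, 15])

test_retest_r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {test_retest_r:.2f}")
```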

  9. Test-Retest Reliability: Traits and States States: Personality variables that are expected to change over time (e.g., mood). Test-retest reliability is not suitable for measuring states. Traits: Personality variables that are expected to be relatively stable over time. Test-retest reliability may be used with traits.

  10. Basic Correlation Facts 1. Correlation coefficients range from -1.0 to +1.0. 2. The sign indicates the direction of the correlation. 3. With positive correlations, as Variable X increases so does Variable Y. 4. With negative correlations, as Variable X increases Variable Y decreases. 5. The number indicates the magnitude or strength of the correlation. Thus, a correlation of -.5 is stronger (larger in absolute value) than a correlation of +.3.
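
These facts are easy to verify numerically. A short Python sketch with made-up data, using the Pearson correlation from NumPy:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
y_pos = np.array([2, 1, 4, 3, 6, 5])   # tends to rise with x -> positive r
y_neg = np.array([6, 5, 4, 3, 2, 1])   # falls as x rises   -> negative r

r_pos = np.corrcoef(x, y_pos)[0, 1]
r_neg = np.corrcoef(x, y_neg)[0, 1]
print(r_pos, r_neg)                     # sign gives the direction

# Strength is the absolute value: r = -.5 is stronger than r = +.3.
print(abs(-0.5) > abs(0.3))             # True
```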

  11. Internal Consistency Internal consistency: An index of reliability referring to the degree to which items measuring the same construct correlate with one another. Split-half reliability: The correlation between scores on two halves of a scale (e.g., odd-numbered vs. even-numbered items). Cronbach’s coefficient alpha: An index based on the average correlation among all items within a scale. Interrater reliability: The internal consistency of a group of judges.
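
Both internal-consistency indices can be computed from a persons-by-items score matrix. A minimal Python sketch; the item scores are invented, and the split-half value is reported without the usual Spearman-Brown correction:

```python
import numpy as np

# Hypothetical responses: 6 participants (rows) x 4 items (columns) on the same scale.
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def split_half_r(items: np.ndarray) -> float:
    """Correlation between the odd-item total and the even-item total."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    return np.corrcoef(odd, even)[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print(f"Split-half r:     {split_half_r(scores):.2f}")
```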

  12. Validity Validity: The degree to which a test measures what it purports to measure. Construct validity: The extent to which a measured variable measures the conceptual variable. Criterion validity: An assessment of the extent to which a self-report measure correlates with a behavioral measure.

  13. Construct Validity Face validity: The degree to which a measured variable appears to measure a conceptual variable. Content validity: The extent to which a measured variable appears to cover the full domain of the conceptual variable. Convergent validity: The extent to which a measured variable is correlated with other measures designed to assess the same or related constructs. Discriminant validity: The extent to which a measured variable is unrelated to other variables designed to measure other conceptual variables.

  14. Criterion Validity When validity is assessed by correlating a self-report with a behavioral measure, the behavioral measure is called the criterion. Predictive validity: A self-report measure is correlated with a behavior recorded in the future. Concurrent validity: A self-report measure is correlated with a behavior recorded at approximately the same time as the self-report index.
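
The two forms differ only in when the criterion behavior is recorded. A hypothetical Python sketch correlating a self-report with a criterion measured later (predictive) and at the same time (concurrent); all values are invented:

```python
import numpy as np

# Hypothetical self-report scores and behavioral criteria for 8 participants.
self_report        = np.array([10, 14, 9, 20, 17, 12, 15, 18])
behavior_later     = np.array([3, 5, 2, 8, 7, 4, 6, 7])   # recorded months later
behavior_same_time = np.array([4, 5, 3, 7, 6, 4, 5, 7])   # recorded with the survey

predictive_r = np.corrcoef(self_report, behavior_later)[0, 1]
concurrent_r = np.corrcoef(self_report, behavior_same_time)[0, 1]
print(f"Predictive validity: r = {predictive_r:.2f}")
print(f"Concurrent validity: r = {concurrent_r:.2f}")
```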

  15. Improving Reliability And Validity 1. Pilot test: Pre-test your measure on a small group of participants. 2. Use multiple measures: Reliability tends to increase as a function of the number of test items (see the Spearman-Brown sketch after this list). 3. Employ measures with substantial variability: Avoid measures where almost everyone gives the same answer. 4. Avoid ambiguity in the meaning of words and “double-barreled” items.

  16. Improving Reliability And Validity 5. Use instructions to get participants to respond seriously; it is often useful to embed items that catch respondents who are not taking the test seriously. 6. Try to make your measures nonreactive. 7. Ensure that you cover a large section of the domain of interest. 8. When possible, use existing measures rather than creating your own tests or indices.
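
Point 2 above, that reliability grows with the number of test items, is often illustrated with the Spearman-Brown prophecy formula. A short Python sketch; the starting reliability of .60 and the item counts are hypothetical:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is lengthened by `length_factor`
    with comparable items: r' = n*r / (1 + (n - 1)*r)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Starting from a 10-item test with reliability .60 (hypothetical):
for factor in (1, 2, 3):
    print(f"{10 * factor:2d} items -> predicted reliability = "
          f"{spearman_brown(0.60, factor):.2f}")
```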
