
Construct Validity and Measurement



Presentation Transcript


  1. Construct Validity and Measurement Do they measure what they say they measure?

  2. Construct Validity [slide diagram] At the theory level ("what you think") sit the cause construct and the effect construct. Operationalization links theory to observation: the cause construct is operationalized as the program ("what you do"), and the effect construct as the observations ("what you see").

  3. Construct Validity [slide diagram, repeated] The same cause-construct/effect-construct diagram, with the added question: can we generalize from the program and observations back to the constructs?

  4. Construct Validity • the degree to which inferences can legitimately be made from the operationalizations in a study to the theoretical constructs on which they are based or • Did the study really measure what it claimed to measure?

  5. Central Questions • “Is your operationalization an accurate translation of the construct?” • “Are you measuring what you intended to measure?” • “Does your program/treatment accurately reflect what you intended?”

  6. Construct Validity • translation validity • face validity • content validity • criterion-related validity • predictive validity • concurrent validity • convergent validity • discriminant validity • Important Point: these are all aspects or elements of construct validity; think of them as things to look for when evaluating construct validity

  7. Translation Validity vs. Criterion-Related Validity • translation validity… • assesses whether the operationalization is a good reflection of the construct • criterion validity… • assesses whether the operationalization behaves in the way it should given your theory of the construct (uses other measures (criteria) to assess construct validity)

  8. Translation Validity • Focuses on whether the operationalization is a good reflection of the construct • Face Validity: On its face, does the operationalization look like a good translation of the construct?

  9. Translation Validity: Face validity Construct • satisfaction with KNR 164 course Possible Operationalizations (ways to measure construct) Question: Which of these are more or less reasonable on the face of it?

  10. Translation Validity: Face validity Construct • Fitness Possible Operationalizations (ways to measure construct) Question: Which of these are more or less reasonable on the face of it?

  11. Translation Validity: Content Validity • Content Validity: Operationalization is checked against the relevant content domain for the construct • Often involves researching just how the construct is defined by those in a position to know (experts)

  12. Translation Validity: Content Validity Construct • fitness program Possible Operationalization (key elements?) Question: Are these the correct elements of the construct?

  13. Criterion Validity • The performance of your operationalization (i.e., measure) is checked against a criterion. • A prediction of how the operationalization will perform on some other measure based on your theory or construct • “Validating” a measure based on its relationship to another measure

  14. Criterion Validity • Predictive Validity: Operationalization’s ability to predict something it should theoretically be able to predict • e.g.: GRE and grad school performance (!) • Concurrent Validity: Operationalization’s ability to distinguish between groups which theoretically should be different • e.g.: fitness test for athletes and non-athletes
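Both criterion relationships above are typically checked with simple statistics. A minimal sketch (all scores below are invented for illustration, and the cutoffs are not from the slides): predictive validity as the correlation between a measure (GRE score) and a later criterion (graduate GPA), and concurrent validity as the mean difference between groups that theory says should differ (athletes vs. non-athletes on a fitness test).

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Predictive validity: does the measure predict a later criterion?
# (hypothetical GRE scores and first-year graduate GPAs)
gre = [300, 305, 310, 315, 320, 325]
gpa = [3.0, 3.1, 3.2, 3.4, 3.5, 3.6]
r_pred = pearson_r(gre, gpa)  # a strong positive r supports predictive validity

# Concurrent validity: does the measure separate groups that should differ?
# (hypothetical fitness-test scores)
athletes = [82, 88, 90, 85, 91]
non_athletes = [60, 65, 58, 70, 62]
mean_diff = sum(athletes) / len(athletes) - sum(non_athletes) / len(non_athletes)

print(round(r_pred, 2), round(mean_diff, 1))
```

In a real study the correlation would be tested for significance and the group difference with a t-test; the point here is only the logic of using an external criterion to validate the measure.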

  15. Criterion Validity • Convergent Validity: Degree to which the operationalization is similar to (converges on) other operationalizations to which it theoretically should be similar • e.g.: • Discriminant Validity: Degree to which the operationalization is not similar to other operationalizations to which it theoretically should not be similar • e.g.:
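The slide leaves the examples blank, so here is a hypothetical one: two operationalizations of fitness (a step test and a run test) should correlate highly with each other (convergent), but not with a measure of an unrelated construct such as a vocabulary quiz (discriminant). All scores and measure names below are invented for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x))
                  * sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical scores for 5 participants on three measures.
step_test = [1, 2, 3, 4, 5]    # fitness measure A
run_test = [2, 4, 6, 8, 10]    # fitness measure B (same construct)
vocab_quiz = [4, 1, 5, 2, 3]   # unrelated construct

# Convergent: two operationalizations of the same construct
# should correlate highly.
r_convergent = pearson_r(step_test, run_test)

# Discriminant: operationalizations of different constructs should not.
r_discriminant = pearson_r(step_test, vocab_quiz)

print(round(r_convergent, 2), round(r_discriminant, 2))  # → 1.0 -0.1
```

Together the two correlations form a simple version of the pattern-matching logic behind the multitrait-multimethod approach: the measure is trusted more when it converges where it should and diverges where it should.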

  16. Threats to Construct Validity • inadequate preoperational explication • mono-operation bias • mono-method bias • interaction of different treatments • interaction of testing and treatment • restricted generalizability across constructs • confounding constructs and levels • social threats to construct validity

  17. Examples of Threats to Construct Validity • construct not defined clearly enough • only one possible example of the construct (either IV or DV)

  18. Examples of Threats to Construct Validity • inaccurate labeling of construct • missing important elements • failure to define or consider “dose” • social issues • participants guessing what they are “supposed” to do or say • participants being apprehensive about being evaluated • experimenter’s expectancies biasing the observations being made

  19. Below is a research problem. Identify which of the threats to construct validity may be of major concern. General idea behind the research scenario (a quotation from our researcher): “I feel that plyometric strength training is more effective for gaining strength than isometric strength training. I’ve done plyometrics for years, and it has worked wonders.” An undergrad class taught by the researcher is split into 3 groups of 30. One third is assigned to a plyometric strength-training program, 1/3 to an isometric program, and 1/3 do nothing. Before assigning them to groups, the researcher makes sure to tell the entire class about the purpose of the research, and explains that it is being done to see whether the researcher’s suspicions about plyometrics are correct. Before and at the end of the programs, all students are tested on a measure of strength - a grip dynamometer. This test is supervised by the researcher to make sure proper procedures are followed. It is expected that the plyometric group will make the greatest strength gains.

  20. Guiding Questions Construct Validity – Measures/Observations • What, in theory, are the researchers trying to assess or measure? (List each construct.) Answer the following questions for each construct included in the study. • Do the researchers explicitly define the construct? If so, how? • If the construct is not explicitly defined by the authors, what does the construct mean to you (in theory)? • How did the researchers operationalize the construct? That is, how exactly did they measure/assess the construct? • In your opinion, is the operationalization of the construct a reasonable approximation of the theoretical construct? In other words, does the measure they used in the study match up with what they said they were trying to measure? [This is the key Construct Validity question] • Do you see any limitations with their operationalization (i.e., are any of the common threats an issue)? • Do the researchers provide or present any evidence of content validity or criterion-related validity?

  21. Guiding Questions Construct Validity – Interventions/Treatments • What, in theory, are the researchers trying to test as their intervention or treatment? • Do the researchers provide some idea of what their intervention/treatment should look like in theory? If not, what should the intervention/treatment be to you (in theory)? • How did the researchers operationalize the intervention/treatment? That is, what exactly did they have the subjects or groups of subjects do? • In your opinion, is the operationalization of the intervention/treatment a reasonable approximation of what the researchers wanted to test in theory? In other words, does the intervention/treatment they used in the study match up with what they said they were trying to do? [This is the key Construct Validity question] • Do you see any limitations with their operationalization (i.e., are any of the common threats an issue)?

  22. Overall (final) question in each case: Do the limitations to construct validity • change the meaning of the study’s conclusions, or • have the potential to alter the results of the study?

  23. Practice Activity #1: Evaluate the construct validity of this study General idea behind the research scenario (a quotation from our researcher): “I feel that plyometric strength training is more effective for gaining strength than isometric strength training. I’ve done plyometrics for years, and it has worked wonders.” An undergrad class taught by the researcher is split into 3 groups of 30. One third is assigned to a plyometric strength-training program, 1/3 to an isometric program, and 1/3 do nothing. Before assigning them to groups, the researcher makes sure to tell the entire class about the purpose of the research, and explains that it is being done to see whether the researcher’s suspicions about plyometrics are correct. Before and at the end of the programs, all students perform a 1RM leg extension test as a measure of strength. This test is supervised by the researcher to make sure proper procedures are followed. During the 4 weeks of training, the subjects in the plyometric group did 10 drop jumps each day from a height of 2 feet, while the subjects in the isometric group performed 3 sets of 10 reps of the following exercises using Nautilus equipment: bench press, shoulder press, and biceps curls. Those in the control group were instructed to do no physical activity during the 4 weeks. It was expected that the plyometric group would make the greatest strength gains.

  24. Practice Activity #2 • Use the guiding questions to evaluate the construct validity of measures used in the study distributed in class
