Developing a Questionnaire



  1. Developing a Questionnaire Chapter 4

  2. Types of Questions • Open-ended • high validity, but responses are harder to code and analyze (low manipulative quality) • Closed-ended • lower validity, but responses are easy to code and analyze (high manipulative quality)

  3. Open-ended • An open-ended question is one in which you do not provide any standard answers to choose from. • How old are you? ______ years. • What do you like best about your job?

  4. Closed-ended • A closed-ended question is one in which you provide the response categories, and the respondent just chooses one: What do you like best about your job? (a) The people (b) The diversity of skills you need to do it (c) The pay and/or benefits (d) Other: ______________________________
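
Closed-ended items are easy to capture electronically. Below is a small illustrative Python sketch of how such an item, including its free-text "Other" category, might be represented and validated; the structure and function names are assumptions, not part of the slides.

```python
# Hypothetical representation of the closed-ended item above; the dict layout
# and validation rule are assumptions for illustration.

question = {
    "text": "What do you like best about your job?",
    "options": [
        "The people",
        "The diversity of skills you need to do it",
        "The pay and/or benefits",
    ],
    "allow_other": True,   # free-text "Other: ____" category
}

def is_valid_answer(q, answer, other_text=""):
    """A response is valid if it picks one listed option, or 'Other' plus text."""
    if answer in q["options"]:
        return True
    return q["allow_other"] and answer == "Other" and bool(other_text.strip())

print(is_valid_answer(question, "The people"))             # True
print(is_valid_answer(question, "Other", "The autonomy"))  # True
```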

  5. Dichotomous Questions • Dichotomous Question: a question that has two possible responses • Could be • Yes/No • True/False • Agree/Disagree

  6. Questions based on Level of Measurement • Use a nominal question to measure a variable • Assign a number to each response category; the number has no quantitative meaning and is simply a placeholder • Use an ordinal question to measure a variable • Rank-order preferences • Ranking more than 5–10 items is difficult for respondents • Rank order does not measure intensity of preference
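
A minimal Python sketch of the distinction, using invented categories and codes: nominal codes are arbitrary placeholders, while ordinal codes carry rank order but say nothing about distance or intensity.

```python
# Nominal coding: the numbers are arbitrary placeholders with no order.
# (Categories and codes below are illustrative, not from the deck.)
occupation_codes = {
    1: "Teacher",
    2: "Engineer",
    3: "Nurse",
    4: "Other",
}

# Ordinal coding: the numbers carry rank order (1 = most preferred), but the
# gaps between ranks do not measure intensity of preference.
job_aspect_ranking = {
    "The people": 1,
    "Skill diversity": 2,
    "Pay and benefits": 3,
}

# Averaging nominal codes would be meaningless; with ordinal codes you can
# compare ranks but not distances.
assert sorted(job_aspect_ranking.values()) == [1, 2, 3]
print(occupation_codes[2], "is just the label stored under placeholder 2")
```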

  7. Interval Level • Attempt to measure on an interval level • Likert response scale: ask an opinion question on a 1-to-5, 1-to-7, etc. bipolar scale • Bipolar: has a neutral point, and the scale ends represent opposite positions of the opinion • Semantic differential: the respondent rates an object on a set of bipolar adjective pairs • Guttman scale: the respondent checks each item with which they agree; the scale is constructed to be cumulative, so if you agree with one item, you probably agree with all of the ones above it in the list
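
As one illustration of the cumulative idea, here is a short Python sketch of scoring a Guttman-style item set; the items and helper names are hypothetical, not taken from the chapter.

```python
# Items ordered from easiest to hardest to agree with (hypothetical wording).
items_easiest_to_hardest = [
    "I would live in the same country as group X",
    "I would live in the same town as group X",
    "I would have a member of group X as a neighbor",
    "I would have a member of group X as a close friend",
]

def guttman_score(responses):
    """responses: list of booleans, one per item, in scale order.
    The score is simply the number of items endorsed."""
    return sum(responses)

def is_cumulative(responses):
    """True if the pattern is a valid cumulative (Guttman) pattern:
    all agreements come before the first disagreement."""
    seen_disagree = False
    for agreed in responses:
        if not agreed:
            seen_disagree = True
        elif seen_disagree:
            return False  # agreement after a disagreement breaks the pattern
    return True

print(guttman_score([True, True, True, False]))   # 3
print(is_cumulative([True, False, True, False]))  # False -> scale error
```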

  8. Filter/Contingency Questions • To determine whether a respondent is qualified to answer a question, you may need a filter (contingency) question • Limit the number of jumps • If there are only two levels, use a graphic (e.g., an arrow) to show the jump • If the responses to a filter won't fit on a single page, it is probably best to send respondents to a specific page rather than to a question number
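
A minimal sketch of skip logic in Python, assuming a hypothetical filter item ("owns_car") and contingency item ("miles_per_week"); the question names and branching rule are illustrative only.

```python
def administer(get_answer):
    """get_answer(question_id) returns the respondent's raw answer."""
    record = {}
    # Filter question: decides whether the respondent is qualified
    # for the contingency item.
    record["owns_car"] = get_answer("owns_car")  # "yes" / "no"
    if record["owns_car"] == "yes":
        # Contingency question: asked only of qualified respondents.
        record["miles_per_week"] = get_answer("miles_per_week")
    else:
        # Single jump: non-owners skip straight past the contingency item.
        record["miles_per_week"] = None
    return record

# Simulated respondent instead of live interviewing:
answers = {"owns_car": "no"}
print(administer(lambda qid: answers.get(qid)))
# {'owns_car': 'no', 'miles_per_week': None}
```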

  9. How many steps in the response scale? • Statistical reliability of the data increases sharply with the number of scale steps up to about 7 steps • After 7, it increases slowly, leveling off around 11 • After 20, it decreases sharply

  10. Should there be a middle category? • Does it make sense to offer one? • It should not be used as the "don't know or no opinion" option • The middle option is usually placed between the positive and negative responses • In an interview, it is sometimes read last

  11. Direct Magnitude Scaling • A method of obtaining ratio-scaled data • The idea is to give respondents an anchor point and then ask them to answer questions relative to that anchor • Example: • Suppose you are interested in the severity of crimes • Begin by assigning a number to one crime, then have respondents assign numbers to the others as ratios of that anchor (e.g., a crime judged twice as severe gets twice the number)
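
A small worked example, assuming an invented anchor crime and made-up respondent numbers: because the resulting data are ratio-scaled, the ratio of any value to the anchor is meaningful.

```python
# Hypothetical magnitude-scaling data; crimes and raw numbers are invented.
ANCHOR_CRIME, ANCHOR_VALUE = "shoplifting", 100

respondent_ratings = {
    "shoplifting": 100,
    "burglary": 300,       # judged 3x as severe as the anchor
    "armed robbery": 800,  # judged 8x as severe as the anchor
}

# Ratios of values are meaningful on a ratio scale.
for crime, value in respondent_ratings.items():
    print(f"{crime}: {value / ANCHOR_VALUE:.1f}x as severe as {ANCHOR_CRIME}")
```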

  12. Filtering "Don't Know" • Standard format • No "don't know" option is presented to the respondent, but one is recorded if the respondent volunteers it • Quasi filter • A "don't know" option is included among the possible responses • Full filter • The respondent is first asked whether they have an opinion; only if yes is the opinion question asked
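
One way to see the difference between the three formats is to lay out their response options as data; the item wording and option labels below are assumptions for illustration, not from the slides.

```python
# Standard format: DK is never offered, but is recorded if volunteered.
STANDARD = ["Agree", "Disagree"]

# Quasi filter: DK is offered explicitly among the responses.
QUASI_FILTER = ["Agree", "Disagree", "Don't know"]

# Full filter: a screening item comes first; the opinion item is asked
# only of respondents who say they have an opinion.
FULL_FILTER = {
    "screen": ["Has an opinion", "No opinion"],
    "opinion": ["Agree", "Disagree"],
}

def record_standard(answer):
    """Standard format: accept a volunteered 'don't know' even though it is
    never read to the respondent."""
    allowed = STANDARD + ["Don't know"]
    return answer if answer in allowed else None

print(record_standard("Don't know"))  # recorded although not offered
```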

  13. Question Placement • It's a good idea to put difficult, embarrassing, or threatening questions toward the end • Respondents are more likely to answer them by then • If they get mad and quit, at least you've gotten most of your questions asked! • Group related questions together so the questionnaire does not appear careless • Watch out for questions that influence the answers to other questions

  14. Wording of Questions • Direction of statements • Watch for response bias and socially desirable answers • "Always" and "never" • Avoid these absolutes; it is better to phrase items with "most" or "infrequently" • Language • Reflect the respondents' educational level and reading ability • Consider the need for versions in other languages

  15. Frequency and Quantity • Consider both frequency and quantity • Frequency: the number of times a behavior occurs • Quantity: how much, or for how long, each time

  16. Mutually Exclusive and Exhaustive • Mutually exclusive: not possible to select more than one category/value • Exhaustive: providing all possible categories/values
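
A quick way to check a set of response categories against both criteria is to test that every value in the target range is covered exactly once; the age brackets below are assumptions for illustration.

```python
# Inclusive age brackets intended to cover ages 18-99 (illustrative values).
brackets = [(18, 29), (30, 44), (45, 64), (65, 99)]

def check_brackets(brackets, lo=18, hi=99):
    covered = set()
    for a, b in brackets:
        for age in range(a, b + 1):
            if age in covered:
                return f"Not mutually exclusive: age {age} fits two brackets"
            covered.add(age)
    missing = [age for age in range(lo, hi + 1) if age not in covered]
    if missing:
        return f"Not exhaustive: no bracket for ages {missing[:3]}..."
    return "Brackets are mutually exclusive and exhaustive"

print(check_brackets(brackets))
```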

  17. Forced Choice • The respondent must choose between two options • Neither option might be relevant • Other choices exist (or are at least possible) • The respondent ends up picking the lesser of two evils

  18. Recalling Behavior • Can be difficult to remember • Ask questions that can be answered • Choose time frames that are reasonable • Pilot test for time frame issues

  19. Response Bias • Exaggerating the truth • Socially desirable answers • Consider using 'trap' questions to detect it • Possibly a fictional choice

  20. Sensitive Items • Respondents are more comfortable answering in categories (ranges) • This minimizes missing data • But you might lose statistical power

  21. Evaluating Questions • Pre-testing • Cognitive interviewing • Behavior coding • Peer review • Peer review has been shown to be the best method, but it is the least used

  22. Validity and Reliability Questions • Evaluative strategies: • Analysis of data to evaluate the strength of predictable relationships among answers and with other characteristics of respondents. • Comparisons of data from alternatively worded questions asked of comparable samples. • Comparison of answers against records. • Measuring the consistency of answers of the same respondents at two points in time.
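
For the last strategy (consistency of answers at two points in time), a common summary is the test-retest correlation. A minimal sketch follows, using invented Likert-type data and the statistics.correlation function available in Python 3.10+.

```python
from statistics import correlation  # Python 3.10+

# Invented 1-5 Likert answers from the same eight respondents at two waves.
time1 = [4, 2, 5, 3, 4, 1, 5, 2]
time2 = [4, 3, 5, 3, 4, 2, 4, 2]

r = correlation(time1, time2)
print(f"test-retest correlation: {r:.2f}")  # higher r -> more consistent answers
```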

  23. Coding the Questionnaire • Create a codebook: reference guide for the data set • Code: assigning a value to a response category • Often numeric code • Pre-coding makes it easier • Content analysis on open-ended items • Yes/No often coded as present or not (0 or 1)
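
A minimal codebook sketch in Python, assuming hypothetical variable names and codes; in practice the codebook would also document question wording, valid ranges, and missing-data codes.

```python
# Each variable maps response categories to numeric codes; the codebook
# doubles as the reference guide for the data set.
CODEBOOK = {
    "q1_likes_job": {          # "What do you like best about your job?"
        "The people": 1,
        "Skill diversity": 2,
        "Pay and/or benefits": 3,
        "Other": 4,
    },
    "q2_member": {             # Yes/No item coded as present (1) or absent (0)
        "No": 0,
        "Yes": 1,
    },
}

def code_response(variable, answer):
    """Translate a raw answer into its numeric code using the codebook."""
    return CODEBOOK[variable][answer]

print(code_response("q2_member", "Yes"))  # 1
```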

  24. Missing Responses • Why are responses blank? • The respondent missed them • Refused to answer • Didn't feel the item applied • Didn't know the answer • Decide whether to code missing responses • Analyze whether respondents with missing data differ from the rest • If you know why a response is missing, consider assigning it a specific missing-data code
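
One common convention (an assumption here, not from the slides) is to reserve out-of-range codes for each reason a response is missing, so the reasons can be examined separately before the values are excluded from analysis.

```python
# Out-of-range negative codes for each missing-data reason (illustrative).
MISSING_CODES = {
    -7: "item skipped / not reached",
    -8: "refused to answer",
    -9: "don't know",
    -6: "not applicable",
}

responses = [4, 2, -8, 5, -9, 3, -6, 4]  # invented 1-5 scale data with gaps

valid = [r for r in responses if r not in MISSING_CODES]
print(f"valid n = {len(valid)}, missing n = {len(responses) - len(valid)}")
# Before analysis, compare respondents with and without missing data to see
# whether they differ systematically.
```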

  25. Piloting the Questionnaire • Test it on yourself • Possibly on other experts • Test it on people similar to your sample • Don't reuse pilot participants (with some exceptions) • Discuss the survey with individuals, either during completion or afterward

  26. Finding Respondents • Use the best methods of selection • Even with a good survey, a poorly chosen sample leads to poor results
