
  1. Survey sampling The foundation of good research design

  2. Introduction • In 2009 the Market Research Society of NZ launched an ongoing series of professional development workshops. • The objective is to lift or refresh the skills of researchers and to demonstrate professional leadership by the research community – freely (or at least inexpensively) sharing our skills and experience with users of research. • The underlying philosophy is to encourage best practice. • Today the subject of the workshop is decidedly unsexy. • It will not equip you with gob-smackingly amazing tales with which to regale your dinner party friends. (“...and then I said, ‘mate – your quota sampling is skewed!’”) • It will, however, re-awaken you to an issue that has drifted away from centre stage of research thinking; an issue that is at the bedrock of reliable, risk-minimised research. • In short – this session will help you achieve more robust, more trustworthy numbers, conclusions and, ultimately, decisions.

  3. Who this is for • Research clients – end users who rely on research/survey data to make reliable decisions • Market researchers who want to trust their own data • Fieldwork managers who are asked to provide good samples

  4. The research iceberg Most researchers think about the data, presentation and results. Beneath the surface lie questions about sampling and fieldwork design.

  5. CATI In California, 70% of phone calls are met with an answering machine or voicemail message.

  6. DOOR TO DOOR? Fieldwork isn’t always easy.

  7. ONLINE SURVEY? While 70%+ of NZers now have online access, who does an online survey systematically exclude?

  8. ANY SURVEY... Meanwhile, which young people are at home doing your liquor survey?

  9. Fieldwork is the unsung hero of Market & Social Research • Every time we specify a survey of 500 people, or 1000 voters, or 750 users of Brand X... we’re hopefully applying careful thought to how we reach them, and ensuring they represent the population we’re reporting on. • Fieldwork is a challenging component of every survey we do. • Statistical theory meets practical realities. • Demand for accuracy meets finite budget. • Our agenda collides with the respondents’ busy agendas.

  10. Two examples of sampling error problems WHY SAMPLING MATTERS

  11. First a couple of examples of why this is important • A recent pitch I went to. The client wanted to monitor his market, and he didn’t have a very big budget. • I suggested that an online approach might best suit his needs, but he reacted very negatively. • “We tried that. According to the survey we had 19% market share one month. Next month the survey said we had 9% share. Yet we know from our own warehouse that we kept shipping the same amount.” • So I asked him whether other things in the survey results had jumped around also, from month to month. • “Yes,” he said. “One month 60% of the respondents were female, next month it was more like 50/50.”

  12. The Literary Digest poll of 1936 • 2 million respondents! • Predicted a landslide for Alf Landon. • Instead, Roosevelt won by a massive majority. • The Literary Digest was discredited and went out of business. • Their mistake: a belief that a big enough sample ought to be enough. • Sampling bias: the sample was skewed towards magazine subscribers, phone owners and self-selectors.

  13. Sample size • The belief in huge sample sizes was largely smashed in the 1936 debacle. (Two million respondents!) Yet it still happens: in 2008 an AOL poll ‘predicted’ a landslide for McCain over Obama in all 50 states. • Statistical theory says that true random sampling, if conducted properly, can give you confident results using much smaller samples – as the sketch below illustrates. You don’t need huge numbers – you need good sampling design.
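To make that lesson concrete, here is a minimal simulation sketch in Python. All the numbers are invented for illustration – a notional electorate with 55% true support, and a reachability bias that halves the chance of reaching a supporter (think magazine subscribers and phone owners in 1936):

```python
import random

random.seed(1)

# Hypothetical electorate of 1,000,000 voters: 55% support candidate A.
population = [1] * 550_000 + [0] * 450_000

# Skewed sampling frame: only half of A's supporters are reachable.
skewed_frame = [v for v in population if v == 0 or random.random() < 0.5]

big_biased = random.sample(skewed_frame, 200_000)  # huge, but biased
small_random = random.sample(population, 1_000)    # small, but random

print("True support:     55.0%")
print(f"Biased n=200,000: {100 * sum(big_biased) / len(big_biased):.1f}%")
print(f"Random n=1,000:   {100 * sum(small_random) / len(small_random):.1f}%")
```

Typically the biased 200,000-person sample reports support in the high 30s, while the random 1,000-person sample lands within a few points of the true 55%.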

  14. So lesson one. • A key building block of trustworthy quant research is the way we choose our survey samples. • It isn’t just the size that counts – it’s about representativeness. • The first step for a researcher is to have a reasonable running knowledge of the population itself. If you do nationwide surveys, then it helps to understand the nature of NZ’s population.

  15. By getting to know some basic numbers you get a better eye for the details Profile of New Zealand’s population

  16. Pop Quiz. • What proportion of New Zealanders live in the South Island: 25%, 27% or 30%? • Looking just at 18-24s in NZ, what percentage are Maori, what percentage are Pasifika, and what percentage are Asian? • True or false? There’s a man drought amongst NZers aged 30-49. • 15-year age bands: are there more New Zealanders aged 15-29 than there are 30-44? • How many people in the average NZ HH: 2.4, 2.7, 2.9 or 3.2? • Answers: 24.7% live in the South Island. • Among 18-24s in NZ, 16% are Maori, some 8% are Pasifika, and 17% are Asian. • True: 52% of the 30-49 age band are female, and they outnumber men of that age band by 46,000. • Battle of the bands: yes, more in the young band, but it’s closer than you think – 908,000 15-29s versus 889,000 30-44s. • Average HH size: 2.7.

  17. Sample Calculator You can see how many respondents you need in each age/gender box. Select your sample size and your age range, then check your margin of error. A rough sketch of the same idea appears below.
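A minimal sketch of such a calculator in Python. The age/gender shares below are illustrative placeholders rather than Statistics NZ figures – substitute real census proportions for your target population:

```python
import math

TARGET_N = 1000  # total respondents you plan to survey

# Illustrative population shares per age/gender cell (they sum to 1.0).
cells = {
    ("18-29", "Male"): 0.11, ("18-29", "Female"): 0.11,
    ("30-49", "Male"): 0.17, ("30-49", "Female"): 0.18,
    ("50+",   "Male"): 0.20, ("50+",   "Female"): 0.23,
}

for (age, gender), share in cells.items():
    print(f"{age} {gender}: {round(TARGET_N * share)} respondents")

# Worst-case margin of error at 95% confidence (assumes p = 0.5).
moe = 1.96 * math.sqrt(0.25 / TARGET_N)
print(f"Margin of error: +/-{100 * moe:.1f}%")  # -> +/-3.1%
```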

  18. Why this is useful to know. • A running knowledge of the population you work with gives you a head start when you design your research. • For example, all those young Asians – how do you get 17% of your youth sample to fulfill the Asian quota, when generally you might find only around 5-6% are willing to take part? • Are you designing your survey to reflect NZ households? Or to reflect NZ consumers? Or just the Wellington market? • So the advice: you need a running knowledge of the people you’re going to listen to.

  19. You want to be representative, right? NOW – HOW WILL YOU DESIGN YOUR SAMPLING?

  20. Your three basic considerations • Who am I actually needing to listen to? • How do I achieve a representative sample? • How many respondents do I need to ensure I have reliable data?

  21. 1. Who are you intending to listen to? • First of all you need to be extremely clear about whose attitudes you’re trying to measure. • For example, if you are conducting a political poll, do you want the opinions of: • Everyone aged 18+? 100% of the adult population. • Everyone eligible to vote (18+ and enrolled to vote)? Maybe 90% of the adult population. • Everyone who intends to vote (18+, intends to be enrolled AND vote)? Maybe 60-70% of the adult population.

  22. 2. How then do you get a representative sample of them? • How do you reach them? • How do you ensure that you get a representative sample from the universe you’ve chosen? • You need to consider: • Randomness: an equal chance of selecting any member of the population ("probability sampling"). Or at least: • Representativeness: they may not be randomly selected (think consumer panel), but they ARE representative. • Control through external selection: respondents are chosen to participate rather than deciding themselves to take the survey. (As opposed to those online media surveys where respondents self-select.) • We’ll talk about how many later on.

  23. How to reach them – The main three.
  CATI
  • Quick to deploy.
  • You can quota-sample to a rigorous extent – for example Age/Gender/Region/Income.
  • Can manage call-backs in order to include “hard to reach” respondents in the sample frame.
  • Around 5-8% of HHs are excluded.
  On-line
  • Quick to deploy.
  • You can quota-sample to a limited extent.
  • Sampling is not random.
  • There are question marks about the representativeness of the panel, and the process of reaching panellists.
  • Are call-backs employed?
  • Around 25% of HHs are excluded from this universe.
  Door to Door
  • Slower and more expensive to deploy.
  • Allows (depending on your design) extremely rigorous scientific random sampling: a random person in a random house on a random street in a random area.
  • In theory no HHs are excluded.

  24. How to reach them – Other options.
  Intercepts
  • Think shopping-mall intercept, or info-booth.
  • Requires a two-step design: first choose the right mix of shopping centres/locations, then a fair and systematic method of recruiting respondents at those locations.
  Invitation
  • Quick and economical to deploy.
  • Online – for example run via websites, such as NZ Herald online polls.
  • May be part of a two-stage system (think in-flight surveys where certain passengers are invited to go online).
  • Or postal surveys.
  • These carry a lesser or greater degree of self-selection bias.
  Central Location
  • Face-to-face interviewing.
  • Usually requires a two-step process – recruitment first.
  • Can be very convenient – though it generally has a bias (geographic, or against people who can’t make the time).

  25. Your survey might mix and match these methods. • The issue is not just about which medium you choose (phone, face to face, online, paper-based etc) but the sampling design underneath these media. • So let’s look at five different types of sampling design.

  26. 1. Simple random sampling • A selection method which gives everyone an equal chance of being selected. • And with enough numbers to even out the statistically expected aberrations. Pull 20 marbles from a jar where 50% are white and 50% are green, and you can expect around 10 of each – maybe 12 greens and 8 whites. The more marbles you select, the closer you’ll get to 50/50.
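A quick sketch of the marble example in Python, showing how the sample proportion settles towards the true 50/50 split as the draw gets bigger:

```python
import random

random.seed(42)
jar = ["white"] * 500 + ["green"] * 500  # a 50/50 jar of 1000 marbles

for n in (20, 100, 500):
    draw = random.sample(jar, n)         # simple random sample, no replacement
    greens = draw.count("green")
    print(f"n={n:>3}: {greens} green ({100 * greens / n:.0f}%)")
```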

  27. 2. Systematic Random sampling • Usually we think of random sampling as being about random phone numbers or addresses. But we may employ other systematic methods according to the survey: • Every 20th pedestrian who walks past us. • Every 10th customer of our hotel. • Every person on a database beginning with odd letters of the alphabet. • The people sitting in grandstand seats pre-chosen before the big game. • The objective is to systematically choose people in order to minimise selection bias.
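A minimal sketch of every-k-th selection in Python, over a hypothetical customer list. The random starting point is a common refinement so the first name on the list isn’t always chosen:

```python
import random

# Hypothetical sampling frame: an ordered list of 200 hotel customers.
customers = [f"customer_{i:03d}" for i in range(1, 201)]

k = 10                       # sampling interval: every 10th customer
start = random.randrange(k)  # random start reduces selection bias
sample = customers[start::k]

print(f"{len(sample)} selected, first three: {sample[:3]}")
```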

  28. 3. Stratified Sampling (Quota or Proportional Sampling) • Here we split our target number of respondents into strata – usually based on age and gender (though they can be more complex than this). • Then sampling takes place – and a different approach might be taken for each stratum. For example, you might need a different methodology to reach those hard-to-get 18-24s (more call-backs for this group, perhaps).
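A minimal stratified-draw sketch in Python; the frame, strata and quota numbers are all invented for illustration:

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical frame of 5000 people, each tagged with an age stratum.
frame = [(f"person_{i}", random.choice(["18-24", "25-44", "45+"]))
         for i in range(5000)]

# Target respondents per stratum - your quota design.
quotas = {"18-24": 80, "25-44": 220, "45+": 200}

by_stratum = defaultdict(list)
for person, stratum in frame:
    by_stratum[stratum].append(person)

# Draw each stratum separately; in practice the hard-to-reach strata
# may need a different contact method or more call-backs.
sample = {s: random.sample(by_stratum[s], n) for s, n in quotas.items()}

for stratum, people in sample.items():
    print(stratum, len(people))
```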

  29. 4. Cluster Sampling • Two-stage sampling method. • For example, imagine a business survey. Your company supplies 600 retailers. Within each retailer you deal with 10 people. • Cluster sampling would start by selecting only some of the retailers. Maybe one in every five, or maybe through some stratified system (x number from the South Island, or stratified by size). • Having selected these target retailers, you then contact three or four of their 10 contact staff at random. • In essence, panels operate on this principle. The question is, how well does the initial panel creation reflect the universe?
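Using the slide’s own numbers (600 retailers, 10 contacts each, one retailer in five, three contacts per selected retailer), a minimal two-stage sketch in Python:

```python
import random

random.seed(3)

# Hypothetical frame: 600 retailers, each with 10 known contacts.
retailers = {f"retailer_{r:03d}": [f"contact_{r:03d}_{c}" for c in range(10)]
             for r in range(600)}

# Stage 1: select one retailer in every five (120 clusters).
chosen = random.sample(list(retailers), k=len(retailers) // 5)

# Stage 2: within each chosen retailer, contact 3 of the 10 staff at random.
sample = [c for r in chosen for c in random.sample(retailers[r], 3)]

print(f"{len(chosen)} retailers selected, {len(sample)} contacts in total")
```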

  30. 5. Multi-stage sampling • This may involve a cascade of different sampling methods. • On the right is the flow of a social research project where we needed to listen to a spectrum of members of the Pasifika community in South Auckland. • The last stage involved reaching individuals whose lives were outside the main church/community groups. • The point is, you can be quite creative, but you always need to be thoughtful.

  31. Ask yourself. • Who are we trying to listen to? • Where is the best place to reach them – the best medium, time or place? • Who are we systematically excluding? And how do we get their voice as well?

  32. Do we just contact people until we get our numbers? Or should we call people back? CALL-BACKS. WHY THEY MAKE A DIFFERENCE

  33. Not all people are equally easy to reach. • The chart on the right shows call data from a phone poll of those aged 18+. • Males were just slightly harder to get hold of than females. • But by age, the story shows dramatic differences.

  34. So if you called people just once this is what you’d get. Under-representing young people. Over-representing older people.

  35. So who is hardest to reach? Measured in average number of calls.

  36. So on a One Call only basis you’d be systematically under-representing these groups by around 30%: • Pasifika peoples • Families with mainly school-aged children • Hamiltonians • Those 18-29s • Wellingtonians • Young couples with no children. Let’s try this a second week. Last week we needed 1.65 calls per respondent.

  37. This week we needed 1.95 calls. Aucklanders were busy...

  38. So a one-call strategy in week 2 would have been worse (for under 30s) • These numbers vary from season to season, so if you’re wanting a representative sample of New Zealanders, a “contact them once” strategy will produce highly variable results. • Here – on a second wave of the poll – the hit rate for 18-29s was even lower.

  39. Now even this disguises issues. • As we can see, among the hardest to reach are young people with no kids. • If we set a quota sample and kept phoning (a new number each time) until we found our required 22% of respondents in this age group – how would you describe the people you’ve found? • Socially active? Go to the cinema a lot? Love pubs and clubs? • OR • Tend to stay at home, with less discretionary income? • Things like this make a big difference if you’re doing a beer, spirits or entertainment survey, for example.

  40. Now hand on heart. Who thinks about this? • When you specify the fieldwork for a survey – do you specify how many call-backs, how many reminders? • This is actually one of the hidden stories in the sampling iceberg.

  41. How much do they matter? How do we measure these? RESPONSE RATES – HOW THEY ARE MEASURED

  42. The response rate • When a client – typically a Government client – insists on, say, a 70% response rate or a 50% response rate, what are they actually asking for? • What impact will this have on your survey sampling design? • How can we lift response rates? • How much does this matter? • When a client asks what the response rate in this survey was – do we know the answer?

  43. There’s actually some debate about this. • Very serious studies in the USA have queried whether a 50% response rate delivers results significantly different from a survey in which a 20% response rate is achieved. • Yet the intuitive argument seems to apply: the higher the response rate we achieve, the more we can be certain that we are not systematically missing – say – busy people, or hard to reach people.

  44. The response rate • The response rate formula varies somewhat. • Fundamentally it is about the number of satisfactorily completed surveys per 100 calls. • The example on the right is from an actual CATI survey, a poll in NZ. Of 1000 calls made, 44% of those contacted fully took part in the survey.
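A sketch of the calculation, using invented call dispositions chosen to reproduce the 44% figure. Real response-rate formulas vary (AAPOR, for example, publishes several standard definitions); this is the simple completes-per-eligible-contact version:

```python
# Illustrative dispositions from a hypothetical 1000-call CATI log.
calls = {
    "completed":  310,  # interview finished satisfactorily
    "refused":    240,  # contacted but declined
    "incomplete": 155,  # started but broke off
    "no_answer":  220,  # never contacted
    "ineligible":  75,  # fax lines, businesses, out of scope
}

# Completes divided by all eligible people actually contacted.
eligible_contacts = calls["completed"] + calls["refused"] + calls["incomplete"]
response_rate = calls["completed"] / eligible_contacts

print(f"Response rate: {100 * response_rate:.0f}%")  # -> 44%
```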

  45. Here’s the log from an ongoing series of CATI surveys where 1000 respondents were required.

  46. Some definitions

  47. Summary • Establishing a good response rate is one pivotal means of having data you can rely on – though the importance of this measure has been challenged. • There is no ironclad guarantee that a survey with a 40% response rate is significantly more reliable than a survey with a 20% response rate. • However, measuring it brings three benefits: • The ability to identify ‘hard to reach’ segments of the population and adjust sampling strategy in future. • A general reliability/confidence indicator: is a low response rate creating a systematic bias? • The capacity to cost your fieldwork a lot more accurately.

  48. How many is enough? How confident can we be? SAMPLE SIZE, MARGIN OF ERROR – STATISTICAL CONFIDENCE

  49. Sample size • You don’t need huge numbers. (Remember those 2 million respondents!) • Statistical theory says that true random probability sampling, if conducted properly, can give you confident results using much smaller samples. You don’t need huge numbers – you need good sampling design. • So how many do we need? • What exactly do we mean by Margin of Error? • The two questions are closely related.

  50. Margin of Error • If from a bucket of 1000 black and white marbles (500 of each) you pulled out 200 marbles at random – how many black marbles might you expect? 50%? • The answer is: around about 50%. You might get more, you might get less. • In this case the margin of error is ±6.2%. In other words, 95% of the time between 44% and 56% of your marbles would be black. • So our margin of error depends on 4 things: • Size of the sample universe: 1000. • Quality of the random sampling (did you truly pick at random?). • Size of the sample: 200. • Level of confidence: 95% is the standard. • The sketch below reproduces this arithmetic.
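A short check of the slide’s arithmetic in Python. Because 200 marbles are drawn from a universe of only 1000, a finite population correction applies, which is what brings the 95% margin of error down to the quoted ±6.2%:

```python
import math

N, n, p, z = 1000, 200, 0.5, 1.96  # universe, sample, proportion, 95% level

fpc = math.sqrt((N - n) / (N - 1))             # finite population correction
moe = z * math.sqrt(p * (1 - p) / n) * fpc

print(f"Margin of error: +/-{100 * moe:.1f}%")  # -> +/-6.2%
```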
