
Empirical Research


Presentation Transcript


  1. Empirical Research

  2. Questions to ask about a study
  • What is the purpose of the study?
  • About whom and about what topic?
  • What else is known about this topic?
  • What methods were used to collect data?
    • Selection of subjects
    • Representativeness of subjects
    • Survey, interview…
    • Content, wording, ordering of questions
  • Are the data reliable?
    • Possible errors, bias
  • How were the data analyzed?
  • Are the conclusions credible?
    • Are they supported by the data?
    • Are they consistent with what else is known? If not, are they nevertheless credible?

  3. Pew spam report purpose (p. 6)
  “In this research, we wanted to look beyond the familiar measures of spam to explore the relationship between Americans and their spam.
  • What do American Internet users know about spam?
  • What kind of a burden does spam impose on them?
  • How do they interact with spam, both preventively and once it arrives in their inboxes?
  • And finally, how do Americans feel about spam?…
  We hope our questions and findings will help explain more about how the culture of spam affects people. We especially hope that this new information will provide a sense of realism for the policies, laws, and technology now being crafted to reach the endgame of spam.”

  4. Methods
  • Quantitative: measurements of various sorts
    • e.g., available statistical data from service providers
    • Behavioral studies (e.g., laboratory-based usability studies that measure time to perform tasks)
    • Questionnaires and surveys
  • Qualitative
    • Interviews
    • Diary studies
    • Ethnographic studies (more later)
    • ‘Rapid ethnography’

  5. Pew Data Collection Methods
  “For this report, we collected original data from two sources.
  • The first was a national telephone survey of 2,200 adults, including 1,380 Internet users that we conducted during June 2003.
  • The second was a compilation of more than 4,000 first-person narratives about spam that were solicited since September 2002 by the Telecommunications Research & Action Center (TRAC), a national consumer group.” (p. i)

  6. Possible Data from Surveys
  • Facts
    • Characteristics of respondents
  • Self-reported behavior
    • This instance
    • Generally/usually
    • Past
  • Opinions and attitudes
    • Preferences, opinions, satisfaction, concerns

  7. Ways of Administering Surveys
  • Delivering the survey:
    • In person
    • Phone
    • Mail
    • Paper, in person
    • Email (usually with a link)
    • Web
  • Asking for cooperation:
    • Direct (e.g., telephone)
    • Indirect (“please click on this link…”)

  8. Methods of sample selection
  • Random sampling: each member of the population has an equal and known chance of being selected.
  • Systematic sampling: every Nth record is selected from a list of population members.
  • Stratified sampling: the researcher first identifies the relevant strata (e.g., male/female, manager/subordinate) and their actual representation in the population. From each stratum, a number of subjects large enough for us to be reasonably confident that the stratum represents the population is selected.
  • Convenience sampling is used in exploratory research; the sample is selected because it is convenient.
  • Judgment sampling is a common nonprobability method: the researcher selects the sample based on judgment. This is usually an extension of convenience sampling.
  • Quota sampling is the nonprobability equivalent of stratified sampling: convenience or judgment sampling is used to select subjects from each stratum.
  • Snowball sampling relies on referrals from initial subjects to generate additional subjects.
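
A short sketch can make the probability-based schemes above concrete; the sampling frame, sample size, and strata below are hypothetical, and Python's standard random module stands in for whatever tooling a real study would use.

```python
import random

# Hypothetical sampling frame: a list of population members.
frame = [f"person_{i}" for i in range(10_000)]
k = 200  # desired sample size

# Random sampling: each member has an equal, known chance of selection.
simple_random = random.sample(frame, k)

# Systematic sampling: every Nth record, starting from a random offset.
step = len(frame) // k
start = random.randrange(step)
systematic = frame[start::step][:k]

# Stratified sampling: draw from each (assumed) stratum in proportion to its
# share of the population, so the sample mirrors the strata's actual representation.
strata = {"manager": frame[:2_000], "subordinate": frame[2_000:]}
stratified = []
for name, members in strata.items():
    share = len(members) / len(frame)
    stratified.extend(random.sample(members, round(k * share)))

print(len(simple_random), len(systematic), len(stratified))  # 200 200 200
```

Convenience, judgment, quota, and snowball sampling have no comparable recipe, since they depend on the researcher's access and choices rather than on known selection probabilities.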

  9. Populations and samples
  • Unit of analysis: the unit about which information is collected; e.g., household, individual.
  • Population: the particular collection of units that make up the population.
    • Target population: to whom one would like to generalize results; e.g., all US Internet users.
    • Operational definition of the population: what constitutes “US”? What constitutes “Internet user”?
  • Sampling element: e.g., household, individual.
    • Pew called households and asked about individuals.
  • Sampling frame: the list of all sampling elements.

  10. Definitions/Operationalization (Internet users)
  • A Nation Online: individuals using the Internet
    • Individuals age 3+
    • “Is there a computer or laptop in this household?”
    • “Does anyone in this household connect to the Internet from home?” (ditto for the office)
    • “Other than a computer or laptop, does anyone in this household have some other device with which they can access the Internet, such as: a cellular phone or pager, a personal digital assistant or handheld device, a TV-based Internet device, something else / specify”
    • Sept. 2001: 143,000,000
  • Nielsen/NetRatings: individuals with Internet access from home
    • “Internet usage estimates are based on a sample of households that have access to the Internet and use the following platforms: Windows 95/98/NT, and MacOS 8 or higher”
    • “The Nielsen/NetRatings Internet universe is defined as all members (2 years of age or older) of U.S. households which currently have access to the Internet.”
    • Sept. 2001: 168,600,000 (+18%)

  11. What is spam?

  12. Phone survey sampling method
  • Page 43 of the report gives a detailed description.
  • National telephone survey, with numbers generated to avoid biases introduced by unlisted numbers.
  • Multiple attempts, to avoid bias from non-answered calls.
  • Final response rate: 30% - acceptable, considering their efforts to ensure representativeness.
  • Methods of asking whom to speak to were developed to correct for biases in who tends to answer the phone.
  • Results weighted by demographics to correct for differences between the sample and the US population (see the weighting sketch below).
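
The demographic weighting in the last bullet can be sketched as simple post-stratification: each respondent's weight is the group's share of the target population divided by its share of the sample. The age groups and figures below are invented for illustration; they are not Pew's actual weighting cells.

```python
# Post-stratification weighting sketch (illustrative numbers only).
population_share = {"18-29": 0.22, "30-49": 0.38, "50-64": 0.25, "65+": 0.15}
sample_counts    = {"18-29": 300,  "30-49": 900,  "50-64": 600,  "65+": 400}

n = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

# Under-represented groups get weights > 1, over-represented groups get < 1;
# weighted tallies then sum weights per answer instead of counting raw responses.
print(weights)
```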

  13. Analysis of data: Confidence intervals (p. 43)
  • For results based on the total sample (n = 2,200), one can say with 95% confidence that the error attributable to sampling and other random effects is plus or minus 2.2 percentage points.
    • I.e., if 56% answer a question “yes,” that’s 56% ± 2.2, or between 53.8% and 58.2%.
  • For results based on Internet users (n = 1,380), the margin of sampling error is plus or minus 2.8 percentage points.
  • For results based on email users (n = 1,272), the margin of error is ±2.9 percentage points.
  • In addition to sampling error, question wording and practical difficulties in conducting telephone surveys may introduce some error or bias into the findings of opinion polls.
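
The quoted margins can be checked against the textbook formula for a 95% confidence interval on a proportion, MOE = z * sqrt(p(1-p)/n), with z = 1.96 and the worst case p = 0.5. The sketch below gets within a tenth of a point or two of the reported figures; the small remaining gap presumably reflects adjustments (such as a design effect from weighting) that this simple formula omits.

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion, worst case at p = 0.5."""
    return z * sqrt(p * (1 - p) / n)

for label, n in [("total sample", 2200), ("Internet users", 1380), ("email users", 1272)]:
    print(f"{label} (n={n}): ±{margin_of_error(n):.1%}")
# Prints roughly ±2.1%, ±2.6%, ±2.7%, versus the reported ±2.2, ±2.8, ±2.9 points.
```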

  14. Surveys – questionnaire construction
  • Formulating questions
    • What do you want to know?
    • What can people actually tell you?
    • How will you analyze the data? E.g., to investigate differences across groups, you have to ask the appropriate demographic questions (see the sketch after this list).
  • Question types
    • Open-ended: hard to code; inconsistent answers.
    • Closed-ended: use a pretest to come up with a complete set of mutually exclusive, unambiguous possible answers.
  • Wording of questions (and answers)
  • Ordering of questions
  • Pretest, pretest, pretest!
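
The "plan the analysis before writing the questionnaire" point can be made concrete with a tiny cross-tabulation; pandas is assumed as the analysis tool here, and the responses are invented.

```python
import pandas as pd

# Invented responses: the demographic item (age group) was asked precisely
# because we planned to compare answers across groups.
responses = pd.DataFrame({
    "age_group":   ["18-29", "18-29", "30-49", "30-49", "50+", "50+"],
    "spam_bother": ["a lot", "a little", "a lot", "a lot", "a little", "a lot"],
})

# Share of each answer within each group; impossible to compute after the fact
# if the demographic question was never asked.
print(pd.crosstab(responses["age_group"], responses["spam_bother"], normalize="index"))
```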

  15. Qualitative methods: the Pew stories give life to the figures “Spam has totally affected my household. My children are limited in their use of the computer due to spam. I cannot open my mail when they are even in the room! The computer has gone from a useful tool for homework and interacting with long-distance family members to a major focus of anger. The adults are upset that there is no way to set up a friendly email for our children without added expense and the children are fighting over the limited time the adults have to monitor the computer usage.”

  16. Other studies’ methods
  • Ling
    • Observed people using phones in Oslo
    • Performed informal ‘experiments’
  • Palen
    • Studied 19 new users over their 1st 6 weeks of service
    • Performed repeated interviews
    • Had users create voice mail phone diaries
  • Perry
    • Studied 17 UK mobile workers from a range of professions and with different levels of mobility
    • Collected data around specific business trips
    • Relied on interviews, diaries, and analysis of artifacts (tech & docs) used during specific business trips
