Participant Observation

  1. Participant Observation • A method of doing field research, also called ethnography or participant observation; a form of qualitative research • The researcher is socialized into the social setting, i.e., goes where the action is and simply listens, watches, and jots down notes • The researcher participates in a role in the field and makes observer comments (a subjective view) • Field observations are collected as field notes (an objective view)

  2. Interview Schedule • An interview is a piece of social interaction in which one person asks another a number of questions and the other person gives answers • A qualitative interview is essentially a conversation, e.g., a face-to-face interview, focus group, or telephone interview • Types: structured (standardized) and semi-structured • A structured interview schedule is similar to a paper-and-pencil questionnaire, i.e., it can be converted into a questionnaire and vice versa

  3. Content Analysis • The study of recorded human communications • Examples: newspapers, magazines, web pages, poems, books, songs, paintings, speeches, letters, e-mail messages, laws, constitutions, etc. • Any technique that makes inferences by systematically and objectively identifying specified characteristics of messages, which may be manifest or latent • Manifest content: the visible, surface content of the communication (the intended meaning) • Latent content: the underlying meaning (unintended), which requires corroboration

  4. Summary • “Content analysis can be fruitfully employed to examine virtually any type of communication” (Abrahamson, 1983, p. 286) • As a consequence, it can focus on either qualitative or quantitative aspects of communication messages

  5. Reliability vs. Validity in Qualitative Research:

  6. RELIABILITY: • The degree to which a test consistently measures whatever it measures • Kirk and Miller (1986) distinguish three types: • (i) Quixotic: a single method of observation continually yields an unvarying measurement (e.g., one observer told to say the same thing, FBI stories, etc.); trivial • (ii) Diachronic: the stability of an observation over time; weakness: nothing is fixed, things change • (iii) Synchronic: the similarity of observations within the same time period; the most important type

  7. Solutions to the problem of reliability: • Carefully report the methodology used in gathering the data • Use double-coding as a means of checking reliability (Miles and Huberman, 1994), • i.e., two or more researchers coding the same field data (inter-coder reliability), or • one researcher coding the same segment of data at two different times (intra-coder reliability)

  8. Calculation of Reliability • Reliability = number of agreements / (number of agreements + number of disagreements) • Most desirable level: 90% • Reliability is much easier to assess than validity
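
The slide's formula is simple enough to compute directly. A minimal Python sketch, using hypothetical codes assigned by two researchers to ten field-note segments (the labels and data are invented for illustration):

```python
def percent_agreement(coder_a, coder_b):
    """Reliability as on the slide: agreements / (agreements + disagreements)."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must code the same number of segments")
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return agreements / len(coder_a)

# Hypothetical codes assigned by two researchers to the same ten segments
coder_a = ["emic", "etic", "emic", "emic", "etic", "emic", "etic", "emic", "emic", "etic"]
coder_b = ["emic", "etic", "emic", "etic", "etic", "emic", "etic", "emic", "emic", "etic"]

print(f"Inter-coder reliability: {percent_agreement(coder_a, coder_b):.0%}")  # 90%
```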

  9. VALIDITY: • The degree to which a test measures what it is supposed to measure • i.e., confirming how plausible the collected data are • Kenneth Pike (1967) coined the emic and etic concepts, which help explain validity in qualitative research • Emic: studying behavior from inside the system, i.e., using local concepts, e.g., family, culture, etc. • Etic: studying behavior from outside the system, i.e., using pan-cultural concepts, e.g., male circumcision

  10. Modifying an imposed etic to achieve a valid emic perspective • Generate the emic content of an etic construct, i.e., take the etic construct and interpret its emic content, e.g., polygamy (R. W. Brislin, 1976) • The researcher can use triangulation, i.e., multiple methods of data collection: • open-ended techniques and • participant observation

  11. Reliability vs. Validity in Quantitative Research: • Similar to qualitative research, because both deal with measurement

  12. RELIABILITY: • Means consistency or dependability • Example: a weight scale; one gets on it and reads 150 as the weight; • if one repeats this and gets the same weight each time, then the scale is reliable • Also focuses on measurement, or instrumentation, • and is addressed in a variety of ways: test-retest, equivalent-forms, and split-half

  13. Test-Retest: • The degree to which scores are consistent over time • Example: the relationship between SAT scores in 2005 and 2006, • i.e., administering the SAT to the same group of high school seniors at different times • and consistently obtaining the same scores
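
As a sketch of how test-retest consistency can be quantified, the two administrations are correlated below; the slide names no statistic, so the Pearson correlation and the scores are illustrative assumptions:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical SAT scores for the same five seniors, one year apart
scores_2005 = [1150, 1300, 980, 1420, 1210]
scores_2006 = [1170, 1290, 1000, 1400, 1230]

# Test-retest reliability: Pearson correlation between the two
# administrations; r near 1.0 means scores are consistent over time
r = statistics.correlation(scores_2005, scores_2006)
print(f"Test-retest reliability: r = {r:.2f}")
```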

  14. Equivalent-Forms • Administering two different forms of the same test, e.g., the SAT, to the same group at the same time • The most acceptable estimate of reliability • Therefore, the most commonly used in research

  15. Split-Half • The items on the instrument are divided into comparable halves • E.g., a scale divided so that the first half yields the same score as the second • Looks at internal consistency • Weakness: it is difficult to ensure that the two halves are equivalent
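
A minimal sketch of a split-half computation, assuming an odd-even split of the items and the standard Spearman-Brown correction (neither is specified on the slide); the respondents and item scores are hypothetical:

```python
import statistics

# Hypothetical item scores: six respondents, eight items each
items = [
    [4, 5, 3, 4, 5, 4, 3, 4],
    [2, 1, 2, 3, 1, 2, 2, 1],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1, 2, 2],
    [4, 4, 5, 4, 5, 4, 4, 5],
]

# Odd-even split: one common way to form comparable halves
half_a = [sum(row[0::2]) for row in items]  # items 1, 3, 5, 7
half_b = [sum(row[1::2]) for row in items]  # items 2, 4, 6, 8

r_half = statistics.correlation(half_a, half_b)

# Spearman-Brown correction: estimate full-length reliability
# from the correlation between the two half-tests
r_full = (2 * r_half) / (1 + r_half)
print(f"half-test r = {r_half:.2f}, corrected reliability = {r_full:.2f}")
```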

  16. VALIDITY: • Measuring what you think you are measuring

  17. Content (Face) Validity: • The degree to which a test measures an intended content area, e.g., in achievement tests • Example: a judgment on a measure of knowledge of parenting skills could be obtained by consulting experts such as social workers and parents • The judgment depends on the knowledge of the experts

  18. Criterion Validity: • Describes the extent to which a correlation exists between the measuring instrument and another standard (empirical evidence) • E.g., the relationship between college board examination scores and student academic success in college • Two measures need to be taken: the measure of the test itself and the criterion to which the test is related • E.g., a program to help pregnant teenagers succeed in high school, with a criterion such as SAT scores as a comparison
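
In the same spirit as the test-retest sketch above, a criterion-validity check correlates the instrument with its criterion; the exam scores and GPAs below are hypothetical:

```python
import statistics

# Hypothetical data: college board exam scores (the instrument)
# and first-year college GPA (the criterion)
board_scores = [1100, 1250, 990, 1380, 1180, 1320]
college_gpa = [2.9, 3.4, 2.5, 3.8, 3.0, 3.6]

# Criterion validity: the correlation between instrument and criterion
r = statistics.correlation(board_scores, college_gpa)
print(f"Criterion validity coefficient: r = {r:.2f}")
```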

  19. Construct Validity: • The degree to which a test measures an intended hypothetical construct, • i.e., a non-observable trait, such as intelligence, that explains behavior • Involves testing hypotheses (deductive) • The most difficult to establish

  20. Difference between reliability and validity • Reliability: the degree to which a measurement procedure produces similar outcomes when it is repeated • E.g., gender, birthplace, mother’s name: these should always be the same • Validity: tests for determining whether a measure is measuring the concept that the researcher thinks is being measured, • i.e., “Am I measuring what I think I am measuring?”

  21. Note: • A valid test is always reliable, but a reliable test is not necessarily valid • E.g., measuring a concept such as positivism by instead measuring nouns is invalid • Reliability is much easier to assess than validity
