
Experiments


Presentation Transcript


  1. Experiments • Uniquely suited to identify cause-effect relationships • To study the effect of one variable (treatment) on another (outcome/dependent variable) • Use a control group to rule out other causes • The program is the “treatment” in a program evaluation; desired outcomes are the “effect” • Measure change with vs without the program, not just before vs after

  2. Uses of Experiments in PRTR • Effects of information or promotion programs • on knowledge, attitudes, or behavior • Consumer response to marketing mix changes • price, product, promotion, place • Effectiveness of various TR interventions • Impacts of tourism on a community/region • community attitudes; social, economic, and environmental impacts • Benefits/effects of recreation and tourism activity • physical health, mental health, family bonding, economic impacts, learning, etc. • Studying preferences • for landscapes and, more generally, to measure the relative importance of different product attributes in consumer choices (e.g., conjoint analysis)

  3. Characteristics of a true Experiment 1. Sample equivalent experimental and control groups 2. Isolate and control the treatment 3. Measure the effect

  4. Pre-test/Post-test with Control • Experimental group: R MB1 X MA1 • Control group: R MB2 MA2 • R denotes random assignment to groups; X denotes the treatment • Measure of effect = Δ Expmt gp − Δ Control gp = (MA1 − MB1) − (MA2 − MB2) = with vs without

  5. Example • Expmt group: 75% pre, 90% post • Control group: 70% pre, 80% post • Effect = (90 − 75) − (80 − 70) = 15% − 10% = 5% • With vs without the treatment = 5% • Before vs after = 15%
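
A minimal sketch (not part of the original slides) of the with-vs-without calculation above, using the pre/post percentages from the example; the function name is illustrative.

```python
# Minimal sketch: the with-vs-without ("change in experimental group minus
# change in control group") effect from the pre/post percentages above.

def treatment_effect(pre_expmt, post_expmt, pre_control, post_control):
    """Change in the experimental group minus change in the control group."""
    return (post_expmt - pre_expmt) - (post_control - pre_control)

effect = treatment_effect(pre_expmt=75, post_expmt=90, pre_control=70, post_control=80)
print(f"Effect of treatment: {effect}%")  # (90 - 75) - (80 - 70) = 5%
```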

  6. Threats to Internal Validity • * Pre-measurement (testing): effect of pre-measurement on the dependent variable (post-test) • * Selection: nonequivalent experimental & control groups (statistical regression is a special case) • * History: impact of any other events between the pre- and post-measures on the dependent variable • * Interaction: alteration of the “effect” due to interaction between the treatment & pre-test • Maturation: aging of subjects or measurement procedures • Instrumentation: changes in instruments between pre and post • Mortality: loss of some subjects

  7. Threats to External Validity • Reactive error: Hawthorne effect; artificiality of the experimental situation • Measurement timing: measuring the dependent variable at the wrong time can miss the effect • Surrogate situation: using a population, treatment, or situation different from the “real” one

  8. Quasi-experimental Designs • Ex post facto (after the fact) • No control group • Subjects self-select into the expmt group 1. A travel bureau compares travel inquiries in 1991 and 1994 to evaluate 1992 promotion efforts. 2. To assess the effectiveness of an interpretive exhibit, visitors leaving the park are asked whether they saw the exhibit; the two groups are compared on knowledge, attitudes, etc.

  9. Lab vs Field Experiments: Internal vs External Validity • Internal validity: are the findings correct for the particular subjects & setting? • External validity: can we generalize the results to other similar situations/populations? • Lab expmt: high internal validity, low external • Field expmt: high external validity, low internal

  10. Ad Evaluation – Woodside Example • Design: 30,000 magazine subscribers; randomly assign 10,000 to each of three groups A, B, and C • Treatments: 2 expmt’l groups, 1 control • A – fun-in-the-sun message • B – relax-with-family message • C – no ad, control group • Measures of effect: • Total inquiries received • Unaided ad recall via phone survey of 3,000 subscribers • Expenditures of predicted visitors from each group (phone survey)
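
A minimal sketch (not from the original study) of the random assignment step above; the subscriber IDs and group labels are placeholders, not the study's own data.

```python
# Minimal sketch: randomly assigning 30,000 magazine subscribers to three
# equal groups of 10,000, as in the A-B-C copy split described above.
import random

subscribers = list(range(30_000))   # stand-in IDs for the real mailing list
random.shuffle(subscribers)         # random assignment to groups

groups = {
    "A_fun_in_sun":   subscribers[:10_000],        # fun-in-the-sun ad
    "B_relax_family": subscribers[10_000:20_000],  # relax-with-family ad
    "C_control":      subscribers[20_000:],        # no ad (control)
}

for name, members in groups.items():
    print(name, len(members))       # each group: 10,000 subscribers
```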

  11. Results

  12. Recommendations • A-B-C copy split • Large sample sizes – 1,000 plus • Compare alternatives with each other and to no ad – A to B, and A/B to C • Track multiple measures of impact/effect • Gather spending data to estimate ROI

  13. Pricing Expmt – Bamford/Manning • Design: vary pricing for prime campsites at Vermont State Parks • Treatments: price differentials of $1–$5 • Assign state parks to treatment groups • Measures of effect: • Percent choosing prime sites • Campsite occupancy shift index (compared with the previous year) • Revenue generated • Equity – acceptance of the policy, differences across income groups

  14. Pricing Expmt Results • Occupancy shift of 5% for each $1 differential • Pct choosing prime = 54 − 0.5 × percentage price increase • e.g., $0 differential – 54% choose prime; 10% differential – 49%; 20% differential – 44% • Revenue increase of 4–22% • Small differences across income groups • Pct choosing prime: 20% low income, 25% medium, 26% high • Fee fair? 49% low, 51% medium, 60% high
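
A minimal sketch of the fitted relationship reported above (percent choosing prime = 54 − 0.5 × percentage price increase); the function name and the set of differentials shown are illustrative.

```python
# Minimal sketch of the relationship reported on the slide:
# percent choosing prime sites = 54 - 0.5 * percentage price increase.

def pct_choosing_prime(pct_price_increase):
    """Predicted share of campers choosing prime sites (reported fit)."""
    return 54 - 0.5 * pct_price_increase

for pct in (0, 10, 20):
    print(f"{pct:>2}% price differential -> {pct_choosing_prime(pct):.0f}% choose prime")
# 0% -> 54%, 10% -> 49%, 20% -> 44%, matching the slide's examples
```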
