
Chapter 13




  1. Chapter 13 Inference About Comparing Two Populations

  2. Comparing Two Populations… • Previously we looked at techniques to estimate and test parameters for one population: • Population Mean μ and • Population Proportion p • We will still consider these parameters when we are looking at two populations; however, our interest will now be: • The difference between two means. • The difference between two proportions.

  3. Inference for Two Population Means Goal: Hypothesis test or confidence interval for "μ1 – μ2". The first decision determines the methods used: Independent samples design – the data values gathered from one sample are unrelated to the data values gathered from the second sample. Dependent samples design (matched/paired) – subjects are paired (matched) so they are as much alike as possible before measurements are made.

  4. Examples of Dependent Samples • Dependent samples design (matched/paired): • subjects are measured twice – apply both treatments to the same subject. • pretest and posttest scores for the same person • reactions to two medicines for the same person • subjects are paired in some way before the experiment is conducted – measure differences between subjects • pair patients based on blood pressure, then measure different reactions (blood pressure reduction) for two types of medicine, one given to each patient in each pair • match children based on socio-economic status then assign to two types of teaching methods. • match students based on ACT Math scores, then measure their performance in MAT 205 for 2 different instructors.

  5. Examples of Independent Samples • Independent samples design: (sample sizes can be different) • Independently select samples from both populations • randomly select 35 males and 35 females. • Measure their blood pressure to see if there is a significant difference based on gender. • randomly select 40 NKU athletes and 35 non-athletes (not on NKU teams). Measure GPAs to see if there is a difference. • Take a large sample (say 100 subjects) and randomly assign them to two groups. Again, sample sizes may be different. • For testing 2 drugs, randomly assign 50 people to drug A and 50 people to drug B. No matching based on health, income level, etc. is assumed in this case.

  6. Dependent or Independent? • A baby-food producer wants to test its product against a competitor's. 15 mothers will feed their babies the producer's product, and 15 mothers will feed their babies the competitor's product. At the end of a month, the weight gain for all the babies was measured. • To determine the effect of advertising in the Yellow Pages, a telephone company took a sample of 40 retail stores that did not advertise in the Yellow Pages last year but did so this year. The annual sales for each store in both years were recorded.

  7. Difference of Two Means… • [Diagram: from Population 1, with parameters μ1 and σ1², we draw a sample of size n1 and compute the statistics x̄1 and s1²; likewise for Population 2.] • In order to test and estimate the difference between two population means, we draw random samples from each of two populations. Initially, we will consider independent samples, that is, samples that are completely unrelated to one another.

  8. Difference of Two Means… • In order to test and estimate the difference between two population means, we draw random samples from each of two populations. Initially, we will consider independent samples, that is, samples that are completely unrelated to one another. • Because we are comparing two population means, we use the statistic x̄1 – x̄2.

  9. Sampling Distribution of x̄1 – x̄2 • 1. x̄1 – x̄2 is normally distributed if the original populations are normal – or – approximately normal if the populations are nonnormal and the sample sizes are large (n1, n2 > 30). • 2. The expected value of x̄1 – x̄2 is μ1 – μ2. • 3. The variance of x̄1 – x̄2 is σ1²/n1 + σ2²/n2, and the standard error is √(σ1²/n1 + σ2²/n2).
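
A minimal Python sketch (not part of the original slides) that checks these sampling-distribution facts by simulation; the population parameters and sample sizes below are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    mu1, sigma1, n1 = 600, 40, 35     # hypothetical population 1 and sample size
    mu2, sigma2, n2 = 630, 45, 50     # hypothetical population 2 and sample size

    # Draw many pairs of samples and record xbar1 - xbar2 each time.
    diffs = np.array([
        rng.normal(mu1, sigma1, n1).mean() - rng.normal(mu2, sigma2, n2).mean()
        for _ in range(20_000)
    ])

    print("simulated E(xbar1 - xbar2):", diffs.mean())                  # close to mu1 - mu2 = -30
    print("theoretical standard error:", np.sqrt(sigma1**2/n1 + sigma2**2/n2))
    print("simulated standard error:  ", diffs.std(ddof=1))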

  10. Making Inferences About μ1 – μ2 • Since x̄1 – x̄2 is normally distributed if the original populations are normal – or – approximately normal if the populations are nonnormal and the sample sizes are large (n1, n2 > 30), then: • z = [(x̄1 – x̄2) – (μ1 – μ2)] / √(σ1²/n1 + σ2²/n2) • is a standard normal (or approximately normal) random variable. We could use this to build test statistics or confidence interval estimators for μ1 – μ2…

  11. Making Inferences About μ1 – μ2 • …except that, in practice, the z statistic is rarely used since the population variances σ1² and σ2² are unknown. • Instead we use a t-statistic. We consider two cases for the unknown population variances: when we believe they are equal and, conversely, when they are not equal.

  12. When are variances equal? • How do we know when the population variances are equal? • Since the population variances are unknown, we can’t know for certain whether they’re equal, but we can examine the sample variances and informally judge their relative values to determine whether we can assume that the population variances are equal or not.

  13. Test Statistic for μ1 – μ2 (equal variances) • Calculate the pooled variance estimator as sp² = [(n1 – 1)s1² + (n2 – 1)s2²] / (n1 + n2 – 2)… • …and use it here: t = [(x̄1 – x̄2) – (μ1 – μ2)] / √(sp²(1/n1 + 1/n2)), with degrees of freedom ν = n1 + n2 – 2.
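
A minimal Python sketch of this calculation from summary statistics (the function name and arguments are mine, not from the slides):

    import math

    def pooled_t(xbar1, s1_sq, n1, xbar2, s2_sq, n2, delta0=0.0):
        """Equal-variances t statistic for mu1 - mu2 from summary statistics."""
        # Pooled variance estimator sp^2
        sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
        t = ((xbar1 - xbar2) - delta0) / math.sqrt(sp_sq * (1/n1 + 1/n2))
        df = n1 + n2 - 2
        return t, df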

  14. CI Estimator for μ1 – μ2 (equal variances) • The confidence interval estimator for μ1 – μ2 when the population variances are equal is given by: (x̄1 – x̄2) ± t(α/2) √(sp²(1/n1 + 1/n2)), with degrees of freedom ν = n1 + n2 – 2 and sp² the pooled variance estimator defined above.
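
The corresponding interval, again as a hedged Python sketch from summary statistics (scipy's t distribution supplies the critical value; the function itself is illustrative):

    import math
    from scipy import stats

    def pooled_ci(xbar1, s1_sq, n1, xbar2, s2_sq, n2, conf=0.95):
        """Equal-variances confidence interval for mu1 - mu2 from summary statistics."""
        sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
        df = n1 + n2 - 2
        t_crit = stats.t.ppf((1 + conf) / 2, df)          # t(alpha/2, df)
        half_width = t_crit * math.sqrt(sp_sq * (1/n1 + 1/n2))
        centre = xbar1 - xbar2
        return centre - half_width, centre + half_width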

  15. Test Statistic for μ1 – μ2 (unequal variances) • The test statistic for μ1 – μ2 when the population variances are unequal is given by: t = [(x̄1 – x̄2) – (μ1 – μ2)] / √(s1²/n1 + s2²/n2) • Likewise, the confidence interval estimator is: (x̄1 – x̄2) ± t(α/2) √(s1²/n1 + s2²/n2) • Both use the (Welch–Satterthwaite) degrees of freedom ν = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 – 1) + (s2²/n2)²/(n2 – 1) ].
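
And a sketch of the unequal-variances (Welch) version; as above, the function is illustrative rather than the slides' own tool:

    import math
    from scipy import stats

    def welch_t_and_ci(xbar1, s1_sq, n1, xbar2, s2_sq, n2, conf=0.95):
        """Unequal-variances (Welch) t statistic, degrees of freedom, and CI for mu1 - mu2."""
        se_sq = s1_sq/n1 + s2_sq/n2
        t = (xbar1 - xbar2) / math.sqrt(se_sq)
        # Welch-Satterthwaite degrees of freedom (usually not an integer)
        df = se_sq**2 / ((s1_sq/n1)**2 / (n1 - 1) + (s2_sq/n2)**2 / (n2 - 1))
        half_width = stats.t.ppf((1 + conf) / 2, df) * math.sqrt(se_sq)
        centre = xbar1 - xbar2
        return t, df, (centre - half_width, centre + half_width)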

  16. Which case to use? • Which case to use: equal variances or unequal variances? • Whenever there is insufficient evidence that the variances are unequal, it is preferable to perform the equal variances t-test. • This is so because, for any two given samples, the number of degrees of freedom for the equal variances case is ≥ the number of degrees of freedom for the unequal variances case, and larger numbers of degrees of freedom have the same effect as having larger sample sizes.

  17. Equal or Unequal Variances • We are not going to be calculating these statistics by hand. We will be using technology. • So, if we are asked, we will be using the Equal Variances test.
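
For "using technology" in Python rather than Excel or StatCrunch, scipy runs both versions of the test through one flag; the sample values below are made up purely to show the call.

    from scipy import stats

    sample1 = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9, 12.2]   # placeholder data
    sample2 = [11.2, 11.9, 10.8, 11.5, 12.0, 11.1, 11.4]   # placeholder data

    print(stats.ttest_ind(sample1, sample2, equal_var=True))    # equal-variances (pooled) t-test
    print(stats.ttest_ind(sample1, sample2, equal_var=False))   # unequal-variances (Welch) t-test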

  18. Example 13.1… • Do people who eat high-fiber cereal for breakfast consume, on average, fewer calories for lunch than people who do not eat high-fiber cereal for breakfast? • What are we trying to show? What is our research hypothesis? • The mean caloric intake of high-fiber cereal eaters (μ1) is less than that of non-consumers (μ2), i.e. is μ1 < μ2?

  19. Example: Making an inference about μ1 – μ2 • Solution: • The data are interval (quantitative). • The parameter to be tested is the difference between two means. • The claim to be tested is: the mean caloric intake of consumers (μ1) is less than that of non-consumers (μ2).

  20. Example 13.1… IDENTIFY • The claim that the mean caloric intake of high-fiber cereal eaters (μ1) is less than that of non-consumers (μ2) translates to μ1 < μ2 (i.e. μ1 – μ2 < 0). • Thus, H1: μ1 – μ2 < 0 • Hence our null hypothesis becomes H0: μ1 – μ2 = 0 • Phrase H0 & H1 as a "difference of means".

  21. Example 13.1… • A sample of 150 people was randomly drawn. Each person was identified as a consumer or a non-consumer of high-fiber cereal. For each person the number of calories consumed at lunch was recorded. • The populations are independent: either you eat high-fiber cereal or you don't, and n1 + n2 = 150. • Recall H1: μ1 – μ2 < 0. There is reason to believe the population variances are unequal…

  22. Hypothesis test results: μ1 : mean of Consumers; μ2 : mean of Nonconsumers; μ1 – μ2 : mean difference; H0 : μ1 – μ2 = 0; HA : μ1 – μ2 < 0 (without pooled variances)
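
The same one-sided, unequal-variances test can be sketched in Python; the two lists below are placeholders standing in for the 150 recorded calorie values, which are not reproduced in this transcript (the alternative= keyword needs a reasonably recent scipy).

    from scipy import stats

    consumers = [568, 604, 633, 590, 555]        # placeholder values only
    nonconsumers = [705, 648, 590, 680, 612]     # placeholder values only

    # H0: mu1 - mu2 = 0  vs  HA: mu1 - mu2 < 0, without pooled variances
    t_stat, p_value = stats.ttest_ind(consumers, nonconsumers,
                                      equal_var=False, alternative='less')
    print(t_stat, p_value)   # reject H0 when p_value < alpha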

  23. Example 13.1… COMPUTE • Likewise, we can use Excel to do the calculations… Recall H0: μ1 – μ2 = 0.

  24. Example 13.1… INTERPRET • …however, we still need to be able to interpret the Excel output: look at the p-value.

  25. Example: Making an inference about μ1 – μ2 • p = 0.019. This is less than α = 0.05, so we reject H0 and accept the alternative hypothesis. • There is sufficient evidence to say that the mean caloric intake of consumers (μ1) is less than that of non-consumers (μ2).

  26. Confidence Interval… • Suppose we wanted to compute a 95% confidence interval estimate of the difference between mean caloric intake for consumers and non-consumers of high-fiber cereals: (x̄1 – x̄2) ± t(α/2) √(s1²/n1 + s2²/n2) = (-56.86, -1.56). • That is, we estimate that non-consumers of high-fiber cereal eat between 1.56 and 56.86 more calories than consumers.

  27. Confidence Interval… • Alternatively, you can use the Estimators workbook… values in bold face are calculated for you…

  28. Confidence Interval 95% confidence interval results: μ1 : mean of Consumers; μ2 : mean of Nonconsumers; μ1 – μ2 : mean difference; 95% CI: (-56.8636, -1.5572) (without pooled variances)

  29. Interpreting Confidence Intervals 1. If the confidence interval contains 0, then μ1 could be equal to μ2, or μ1 could be smaller than μ2, or μ1 could be larger than μ2. In other words, we can't tell. 2. If the confidence interval contains only positive values, then we infer (at the specified level of confidence) that μ1 is larger than μ2. So μ1 – μ2 > 0. 3. If the confidence interval contains only negative values, then we infer (at the specified level of confidence) that μ1 is smaller than μ2. So μ1 – μ2 < 0.
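
This decision rule is simple enough to state in code; a tiny Python sketch (the function is illustrative only):

    def interpret_ci(lower, upper):
        """Read off the direction of mu1 - mu2 from a confidence interval."""
        if lower > 0:
            return "mu1 > mu2 (interval is entirely positive)"
        if upper < 0:
            return "mu1 < mu2 (interval is entirely negative)"
        return "cannot tell: the interval contains 0"

    print(interpret_ci(-56.8636, -1.5572))   # Example 13.1: mu1 < mu2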

  30. Interpreting the Confidence Interval Since the 95% CI for the difference is (-56.8636, -1.5572), and both of the limits are negative, we can say (at the 95% level of confidence) that the mean caloric intake of consumers (μ1) is less than that of non-consumers (μ2). This is, of course, the same conclusion we got from the hypothesis test.

  31. Example 13.2… IDENTIFY • Two methods are being tested for assembling office chairs. Assembly times are recorded (25 times for each method). At a 5% significance level, do the assembly times for the two methods differ? • That is, H1: μ1 – μ2 ≠ 0 • Hence, our null hypothesis becomes H0: μ1 – μ2 = 0 • Reminder: since our alternative hypothesis is a "not equals" type, it is a two-tailed test.

  32. Example: Making an inference about μ1 – μ2 (assembly times in minutes) • Solution • The data are interval (quantitative). • The parameter of interest is the difference between two population means. • The claim to be tested is whether a difference between the two methods exists.

  33. Example 13.2… COMPUTE • The assembly times for each of the two methods are recorded and the preliminary statistics are prepared… The sample variances are similar, hence we will assume that the population variances are equal…
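
A hedged Python sketch of the Example 13.2 computation; the lists stand in for the 25 recorded assembly times per method, which are not reproduced in this transcript.

    from scipy import stats

    method_a = [6.8, 5.0, 7.9, 5.2, 7.6, 5.0, 5.9]   # placeholder values only
    method_b = [6.4, 6.2, 5.3, 6.8, 5.0, 6.1, 6.3]   # placeholder values only

    # Two-tailed equal-variances test of H0: mu1 - mu2 = 0 vs HA: mu1 - mu2 != 0
    t_stat, p_value = stats.ttest_ind(method_a, method_b, equal_var=True)
    print(t_stat, p_value)   # compare p_value with alpha = 0.05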

  34. Example 13.2… INTERPRET • Excel, of course, also provides us with the information… or look at the p-value.

  35. So does StatCrunch. Hypothesis test results: μ1 : mean of Method A; μ2 : mean of Method B; μ1 – μ2 : mean difference; H0 : μ1 – μ2 = 0; HA : μ1 – μ2 ≠ 0 (without pooled variances)

  36. Example: Making an inference about μ1 – μ2 • The p-value = 0.3623. This is more than α = 0.05, so we cannot reject H0 (we cannot accept the alternative hypothesis). • Conclusion: There is no evidence to infer, at the 5% significance level, that the two assembly methods differ in terms of assembly time.

  37. Confidence Interval… • We can compute a 95% confidence interval estimate for the difference in mean assembly times as: (x̄1 – x̄2) ± t(α/2) √(sp²(1/n1 + 1/n2)) = (-0.36, 0.96). • That is, we estimate the mean difference between the two assembly methods to be between -0.36 and 0.96 minutes. Note: zero is included in this confidence interval…

  38. Example: Making an inference about μ1 – μ2 • StatCrunch gave us: 95% confidence interval results: μ1 : mean of Method A; μ2 : mean of Method B; μ1 – μ2 : mean difference (with pooled variances)

  39. Example: Making an inference about μ1 – μ2 • This is the same as the manual calculation. • So we can conclude, at the 95% level of confidence, that we cannot say there is any difference in the methods. If there is a difference, then Method A is at most 0.36 minutes faster than Method B, and Method B is at most 0.96 minutes faster than Method A.

  40. Terminology… • If all the observations in one sample appear in one column and all the observations of the second sample appear in another column, the data is unstacked. If all the data from both samples is in the same column, the data is said to be stacked.
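
A small pandas sketch of the two layouts (the column names are illustrative, not from the slides):

    import pandas as pd

    # Stacked: all observations in one column, plus a column identifying the sample.
    stacked = pd.DataFrame({
        "method": ["A", "A", "A", "B", "B", "B"],
        "time":   [6.8, 5.0, 7.9, 6.4, 6.2, 5.3],
    })

    # Unstacked: the observations of each sample appear in their own column.
    unstacked = pd.DataFrame({
        "A": stacked.loc[stacked["method"] == "A", "time"].to_numpy(),
        "B": stacked.loc[stacked["method"] == "B", "time"].to_numpy(),
    })
    print(unstacked)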

  41. Matched Pairs Experiment… • Previously when comparing two populations, we examined independent samples. • If, however, an observation in one sample is matched with an observation in a second sample, this is called a matched pairs experiment. • To help understand this concept, let’s consider example 13.4

  42. Matched Pairs Experiment (Dependent Samples) • What is a matched pairs experiment? • Why are matched pairs experiments needed? • How do we deal with data produced in this way? • The following example demonstrates a situation where a matched pairs experiment is the correct approach to testing the difference between two population means.

  43. Example 13.4… • Is there a difference between starting salaries offered to MBA grads going into Finance vs. Marketing careers? More precisely, are Finance majors offered higher salaries than Marketing majors? • In this experiment, MBAs are grouped by their GPA into 25 groups. Students from the same group (but with different majors) were selected and their highest salary offer recorded. • Here’s how the data looks…

  44. Example 13.4… • The numbers in black are the original starting salary data; the numbers in blue were calculated. • Although a student is either in Finance OR in Marketing (i.e. independent), the fact that the data are grouped in this fashion makes it a matched pairs experiment (i.e. the two students in group #1 are 'matched' by their GPA range). • The difference of the means is equal to the mean of the differences, hence we will consider the "mean of the paired differences" as our parameter of interest.
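
A hedged Python sketch of a test on the mean of the paired differences; the five salary pairs below are placeholders, not the 25 pairs from the example.

    from scipy import stats

    finance   = [61000, 57500, 66000, 59800, 63200]   # placeholder offers
    marketing = [58000, 58900, 61500, 59900, 60100]   # placeholder offers

    # Paired test of H0: mu_D = 0 vs HA: mu_D > 0, where D = Finance - Marketing
    print(stats.ttest_rel(finance, marketing, alternative='greater'))

    # Equivalent view: a one-sample t-test on the differences themselves
    diffs = [f - m for f, m in zip(finance, marketing)]
    print(stats.ttest_1samp(diffs, 0, alternative='greater'))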

  45. Example 13.4… IDENTIFY • Do Finance majors have higher salary offers than Marketing majors? • Since μ1 is the mean of the highest salary offered to Finance MBAs and μ2 is the mean of the highest salary offered to Marketing MBAs, we want to research this hypothesis: H1: μ1 – μ2 > 0 (and our null hypothesis becomes H0: μ1 – μ2 = 0).

  46. Ex 13.4 Matched Pairs Experiment • Solution • Compare two populations of interval data. • The parameter tested is μ1 – μ2, where μ1 is the mean of the highest salary offered to Finance MBAs and μ2 is the mean of the highest salary offered to Marketing MBAs. • H0: (μ1 – μ2) = 0, H1: (μ1 – μ2) > 0

  47. Ex 13.4 Matched Pairs Experiment • Solution – continued • Let us assume equal variances. From the data, the computed test statistic is not large enough to reject H0. • There is insufficient evidence to conclude that Finance MBAs are offered higher salaries than Marketing MBAs.

  48. The effect of large sample variability • Question: The difference between the sample means is 65624 – 60423 = 5,201. So why could we not reject H0 in favor of H1 (μ1 – μ2 > 0)?

  49. The effect of large sample variability • Answer: sp² is large (because the sample variances are large): sp² = 311,330,926. • A large variance reduces the value of the t statistic, and it becomes more difficult to reject H0.
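
The t statistic can be reproduced directly in Python from the summary numbers quoted on these slides (difference in means 5,201; sp² = 311,330,926; 25 observations per major):

    import math

    diff_in_means = 65624 - 60423        # 5,201
    sp_sq = 311_330_926
    n1 = n2 = 25

    t = diff_in_means / math.sqrt(sp_sq * (1/n1 + 1/n2))
    print(round(t, 2))   # about 1.04 with 48 degrees of freedom -- too small to reject H0 at the 5% level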

  50. Reducing the variability • The values each sample consists of might vary markedly. [Diagram: the range of observations in sample A and the range of observations in sample B.]
