

  1. Chapter 13 Inference About Comparing Two Populations

  2. Comparing Two Populations… • Previously we looked at techniques to estimate and test parameters for one population: the population mean μ, the population variance σ², and the population proportion p. • We will still consider these parameters when we are looking at two populations; however, our interest will now be: • The difference between two means. • The ratio of two variances. • The difference between two proportions.

  3. Difference of Two Means… • In order to test and estimate the difference between two population means, we draw random samples from each of two populations. Initially, we will consider independent samples, that is, samples that are completely unrelated to one another. • From Population 1 (parameters μ₁ and σ₁²) we draw a sample of size n₁ and compute the statistics x̄₁ and s₁². (Likewise, we sample from Population 2 to obtain x̄₂, s₂², and n₂.)

  4. Difference of Two Means… • In order to test and estimate the difference between two population means, we draw random samples from each of two populations. Initially, we will consider independent samples, that is, samples that are completely unrelated to one another. • Because we are comparing two population means, we use the statistic: x̄₁ − x̄₂

  5. Sampling Distribution of x̄₁ − x̄₂ • 1. x̄₁ − x̄₂ is normally distributed if the original populations are normal –or– approximately normal if the populations are nonnormal and the sample sizes are large (n₁, n₂ > 30). • 2. The expected value of x̄₁ − x̄₂ is μ₁ − μ₂. • 3. The variance of x̄₁ − x̄₂ is σ₁²/n₁ + σ₂²/n₂, and the standard error is: √(σ₁²/n₁ + σ₂²/n₂)

  6. Making Inferences About μ₁ − μ₂ • Since x̄₁ − x̄₂ is normally distributed if the original populations are normal –or– approximately normal if the populations are nonnormal and the sample sizes are large (n₁, n₂ > 30), then: • z = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(σ₁²/n₁ + σ₂²/n₂) • is a standard normal (or approximately normal) random variable. We could use this to build test statistics or confidence interval estimators for μ₁ − μ₂…

  7. Making Inferences About μ₁ − μ₂ • …except that, in practice, the z statistic is rarely used since the population variances are unknown. • Instead we use a t-statistic. We consider two cases for the unknown population variances: when we believe they are equal, and conversely, when they are not equal.

  8. When are variances equal? • How do we know when the population variances are equal? • Since the population variances are unknown, we can’t know for certain whether they’re equal, but we can examine the sample variances and informally judge their relative values to determine whether we can assume that the population variances are equal or not.

  9. Test Statistic for μ₁ − μ₂ (equal variances) • Calculate the pooled variance estimator as: • sp² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2) • …and use it here: • t = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(sp²(1/n₁ + 1/n₂)), with ν = n₁ + n₂ − 2 degrees of freedom.
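This calculation is easy to verify numerically. Below is a minimal Python sketch of the equal-variances test computed from summary statistics; the function name and the sample figures in the demo call are invented for illustration, not taken from the textbook.

```python
# Minimal sketch: equal-variances (pooled) two-sample t-test from
# summary statistics. All numbers in the demo call are hypothetical.
import math
from scipy import stats

def pooled_t_test(xbar1, s1_sq, n1, xbar2, s2_sq, n2, delta0=0.0):
    # Pooled variance estimator: each sample variance weighted by its df.
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    # Test statistic under H0: mu1 - mu2 = delta0.
    t = ((xbar1 - xbar2) - delta0) / math.sqrt(sp_sq * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p_two_tail = 2 * stats.t.sf(abs(t), df)
    return t, df, p_two_tail

# Hypothetical summary statistics:
print(pooled_t_test(xbar1=10.2, s1_sq=4.0, n1=20, xbar2=9.1, s2_sq=3.6, n2=22))
```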

  10. CI Estimator for μ₁ − μ₂ (equal variances) • The confidence interval estimator for μ₁ − μ₂ when the population variances are equal is given by: • (x̄₁ − x̄₂) ± t_{α/2} √(sp²(1/n₁ + 1/n₂)), with ν = n₁ + n₂ − 2 degrees of freedom, where sp² is the pooled variance estimator.

  11. Test Statistic for μ₁ − μ₂ (unequal variances) • The test statistic for μ₁ − μ₂ when the population variances are unequal is given by: • t = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(s₁²/n₁ + s₂²/n₂) • Likewise, the confidence interval estimator is: • (x̄₁ − x̄₂) ± t_{α/2} √(s₁²/n₁ + s₂²/n₂) • In both cases the degrees of freedom are ν = (s₁²/n₁ + s₂²/n₂)² / [(s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1)], rounded to the nearest integer.
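The same summary-statistic approach works for the unequal-variances case; the sketch below implements the Welch–Satterthwaite degrees-of-freedom formula just given (the function name is my own).

```python
# Minimal sketch: unequal-variances (Welch) t-test from summary statistics.
import math
from scipy import stats

def welch_t_test(xbar1, s1_sq, n1, xbar2, s2_sq, n2, delta0=0.0):
    se_sq = s1_sq / n1 + s2_sq / n2            # squared standard error
    t = ((xbar1 - xbar2) - delta0) / math.sqrt(se_sq)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se_sq ** 2 / ((s1_sq / n1) ** 2 / (n1 - 1)
                       + (s2_sq / n2) ** 2 / (n2 - 1))
    p_two_tail = 2 * stats.t.sf(abs(t), df)
    return t, df, p_two_tail
```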

  12. Which case to use? • Which case to use: equal variance or unequal variance? • Whenever there is insufficient evidence that the variances are unequal, it is preferable to perform the equal-variances t-test. • This is so because, for any two given samples: the number of degrees of freedom for the equal-variances case ≥ the number of degrees of freedom for the unequal-variances case. Larger numbers of degrees of freedom have the same effect as having larger sample sizes.

  13. Example 13.1… • Do people who eat high-fiber cereal for breakfast consume, on average, fewer calories for lunch than people who do not eat high-fiber cereal for breakfast? • What are we trying to show? What is our research hypothesis? • The mean caloric intake of high-fiber cereal eaters (μ₁) is less than that of non-consumers (μ₂), i.e. is μ₁ < μ₂?

  14. Example 13.1… IDENTIFY • The mean caloric intake of high-fiber cereal eaters (μ₁) is less than that of non-consumers (μ₂), which translates to: μ₁ < μ₂ (i.e. μ₁ − μ₂ < 0) • Thus, H1: μ₁ − μ₂ < 0 • Hence our null hypothesis becomes: H0: μ₁ − μ₂ = 0 • Phrase H0 & H1 as a “difference of means”

  15. Example 13.1… • A sample of 150 people was randomly drawn. Each person was identified as a consumer or a non-consumer of high-fiber cereal, and for each person the number of calories consumed at lunch was recorded. • The populations are independent: either you eat high-fiber cereal or you don’t, and n₁ + n₂ = 150. • Recall H1: μ₁ − μ₂ < 0. Looking at the sample variances, there is reason to believe the population variances are unequal…

  16. Example 13.1… COMPUTE • Thus, our test statistic is the unequal-variances t-statistic: t = [(x̄₁ − x̄₂) − 0] / √(s₁²/n₁ + s₂²/n₂) • The number of degrees of freedom comes from the unequal-variances formula above. • Hence the rejection region is: t < −t_{.05,ν}

  17. Example 13.1… COMPUTE • Our rejection region: t < −1.658 • Our test statistic: t = −2.09 • Since our test statistic (−2.09) is less than our critical value of t (−1.658), we reject H0 in favor of H1; that is, there is sufficient evidence to support the claim that high-fiber cereal eaters consume fewer calories at lunch.
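If the raw data (not reproduced in this transcript) were loaded into two arrays, SciPy can run the same one-tailed unequal-variances test; the file names below are placeholder stand-ins, not the textbook's actual data files.

```python
# Sketch only: file names are hypothetical stand-ins for the data set.
import numpy as np
from scipy import stats

consumers = np.loadtxt("consumers.txt")        # lunch calories, group 1
nonconsumers = np.loadtxt("nonconsumers.txt")  # lunch calories, group 2

# One-tailed Welch test of H1: mu1 - mu2 < 0.
t_stat, p_value = stats.ttest_ind(consumers, nonconsumers,
                                  equal_var=False, alternative="less")
print(t_stat, p_value)  # the slides report t = -2.09
```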

  18. Example 13.1… COMPUTE • Likewise, we can use Excel to do the calculations… • Recall H0: μ₁ − μ₂ = 0

  19. Example 13.1… INTERPRET • …however, we still need to be able to interpret the Excel output: compare the t-statistic with the critical value, or look at the p-value. • Beware! Excel gives a right-tail critical value, i.e. 1.6573 vs. −1.6573!

  20. Confidence Interval… • Suppose we wanted to compute a 95% confidence interval estimate of the difference between mean caloric intake for consumers and non-consumers of high-fiber cereals: • (x̄₁ − x̄₂) ± t_{.025,ν} √(s₁²/n₁ + s₂²/n₂) • That is, we estimate that non-consumers of high-fiber cereal eat between 1.56 and 56.86 more calories than consumers.
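A small sketch of that interval computed from summary statistics (the function name and defaults are my own; plugging in the example's sample means and variances should reproduce the 1.56-to-56.86 range):

```python
# Minimal sketch: unequal-variances CI for mu1 - mu2 from summary stats.
import math
from scipy import stats

def welch_ci(xbar1, s1_sq, n1, xbar2, s2_sq, n2, conf=0.95):
    se = math.sqrt(s1_sq / n1 + s2_sq / n2)
    # Welch-Satterthwaite degrees of freedom, as before.
    df = (s1_sq / n1 + s2_sq / n2) ** 2 / (
        (s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1))
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, df)
    diff = xbar1 - xbar2
    return diff - t_crit * se, diff + t_crit * se
```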

  21. Confidence Interval… • Alternatively, you can use the Estimators workbook… values in bold face are calculated for you…

  22. Example 13.2… IDENTIFY • Two methods are being tested for assembling office chairs. Assembly times are recorded (25 times for each method). At a 5% significance level, do the assembly times for the two methods differ? • That is, H1: μ₁ − μ₂ ≠ 0 • Hence, our null hypothesis becomes: H0: μ₁ − μ₂ = 0 • Reminder: since our alternative hypothesis is a “not equals” type, it is a two-tailed test.

  23. Example 13.2… COMPUTE • The assembly times for each of the two methods are recorded and preliminary data is prepared… • The sample variances are similar, hence we will assume that the population variances are equal…

  24. Example 13.2… COMPUTE • Recall, we are doing a two-tailed test; hence the rejection region will be: |t| > t_{α/2,ν} • The number of degrees of freedom is: ν = n₁ + n₂ − 2 = 25 + 25 − 2 = 48 • Hence our critical values of t (and our rejection region) become: t < −t_{.025,48} or t > t_{.025,48}

  25. Example 13.2… COMPUTE • In order to calculate our t-statistic, we first calculate the pooled variance estimator sp² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2), followed by the t-statistic t = (x̄₁ − x̄₂) / √(sp²(1/n₁ + 1/n₂)) (under H0: μ₁ − μ₂ = 0)…
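With the 25 recorded times per method loaded into two arrays, the whole computation is SciPy's default two-sample test; the file names below are hypothetical placeholders for the example's data.

```python
# Sketch only: equal_var=True (the default) uses the pooled variance
# estimator, matching the equal-variances test chosen above.
import numpy as np
from scipy import stats

method1 = np.loadtxt("method1_times.txt")  # hypothetical file name
method2 = np.loadtxt("method2_times.txt")  # hypothetical file name

t_stat, p_value = stats.ttest_ind(method1, method2, equal_var=True)
print(t_stat, p_value)  # two-tailed by default
```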

  26. Example 13.2… INTERPRET • Since our calculated t-statistic does not fall into the rejection region, we cannot reject H0 in favor of H1, that is, there is not sufficient evidence to infer that the mean assembly times differ.

  27. Example 13.2… INTERPRET • Excel, of course, also provides us with the same information: compare the t-statistic with the critical value, or look at the p-value.

  28. Confidence Interval… • We can compute a 95% confidence interval estimate for the difference in mean assembly times as: (x̄₁ − x̄₂) ± t_{.025,48} √(sp²(1/n₁ + 1/n₂)) • That is, we estimate that the mean difference between the two assembly methods lies between −.36 and .96 minutes. Note: zero is included in this confidence interval, consistent with our failure to reject H0…

  29. Terminology… • If all the observations in one sample appear in one column and all the observations of the second sample appear in another column, the data is unstacked. If all the data from both samples is in the same column, the data is said to be stacked.

  30. Identifying Factors I… • Factors that identify the equal-variances t-test and estimator of μ₁ − μ₂: interval data, a comparison of central location, two independent samples, and population variances that can be assumed equal.

  31. Identifying Factors II… • Factors that identify the unequal-variances t-test and estimator of μ₁ − μ₂: interval data, a comparison of central location, two independent samples, and population variances that cannot be assumed equal.

  32. Matched Pairs Experiment… • Previously when comparing two populations, we examined independent samples. • If, however, an observation in one sample is matched with an observation in a second sample, this is called a matched pairs experiment. • To help understand this concept, let’s consider Example 13.4.

  33. Example 13.4… • Is there a difference between starting salaries offered to MBA grads going into Finance vs. Marketing careers? More precisely, are Finance majors offered higher salaries than Marketing majors? • In this experiment, MBAs are grouped by their GPA into 25 groups. Students from the same group (but with different majors) were selected and their highest salary offer recorded. • Here’s how the data looks…

  34. Example 13.4… • The numbers in black are the original starting salary data; the numbers in blue were calculated. • Although a student is either in Finance OR in Marketing (i.e. the samples are independent), the fact that the data is grouped in this fashion makes it a matched pairs experiment (i.e. the two students in group #1 are ‘matched’ by their GPA range). • The difference of the means is equal to the mean of the differences, hence we will consider the “mean of the paired differences”, μD, as our parameter of interest.

  35. Example 13.4… IDENTIFY • Do Finance majors have higher salary offers than Marketing majors? • Since μ₁ − μ₂ = μD: • We want to research this hypothesis: H1: μD > 0 • (and our null hypothesis becomes H0: μD = 0)

  36. Test Statistic for μD • The test statistic for the mean of the population of differences (μD) is: • t = (x̄D − μD) / (sD / √nD) • which is Student t distributed with nD − 1 degrees of freedom, provided that the differences are normally distributed. • Thus our rejection region becomes: t > t_{α, nD−1}
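A minimal sketch of the matched-pairs test: take the differences and run a one-sample t-test on them. The five salary pairs below are invented for illustration, not the textbook's 25 matched pairs.

```python
# Sketch only: paired t-test on invented data.
import numpy as np
from scipy import stats

finance = np.array([65000, 61000, 58500, 60000, 55000])
marketing = np.array([60000, 59500, 57000, 54000, 53500])

d = finance - marketing                      # paired differences
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
p_value = stats.t.sf(t_stat, df=len(d) - 1)  # one tail: H1 is mu_D > 0

# Equivalent shortcut via SciPy:
t2, p2 = stats.ttest_rel(finance, marketing, alternative="greater")
print(t_stat, p_value, t2, p2)
```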

  37. Example 13.4… COMPUTE • From the data, we calculate x̄D = 5,065 and sD = 6,647… • …which in turn we use for our t-statistic: t = (5,065 − 0) / (6,647 / √25) = 3.81 • …which we compare to our critical value of t: t_{.05,24} = 1.711

  38. Example 13.4… INTERPRET • Since our calculated value of t (3.81) is greater than our critical value of t (1.711), it falls in the rejection region; hence we reject H0 in favor of H1. That is, there is overwhelming evidence (since the p-value = .0004) that Finance majors do obtain higher starting salary offers than their peers in Marketing.

  39. Confidence Interval Estimator for μD • We can derive the confidence interval estimator for μD algebraically as: • x̄D ± t_{α/2, nD−1} (sD / √nD) • In the previous example, what is the 95% confidence interval estimate of the mean difference in salary offers between the two business majors? • 5,065 ± t_{.025,24} (6,647 / √25) = 5,065 ± 2,744 • That is, the mean of the population differences is between LCL = 2,321 and UCL = 7,809 dollars.

  40. Identifying Factors… • Factors that identify the t-test and estimator of μD: interval data, a comparison of central location, and samples gathered in a matched pairs experiment.

  41. Inference about the ratio of two variances • So far we’ve looked at comparing measures of central location, namely the means of two populations. • When looking at two population variances, we consider the ratio of the variances, i.e. the parameter of interest to us is σ₁²/σ₂². • The sampling statistic F = [s₁²/σ₁²] / [s₂²/σ₂²] is F distributed with ν₁ = n₁ − 1 and ν₂ = n₂ − 1 degrees of freedom.

  42. Inference about the ratio of two variances • Our null hypothesis is always: • H0: σ₁²/σ₂² = 1 • (i.e. the variances of the two populations are equal, hence their ratio is one) • Therefore, under H0 our statistic simplifies to: F = s₁²/s₂²
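A rough sketch of the two-tailed F-test computed from the two sample variances (the function name, alpha default, and two-tailed p-value convention are my own choices):

```python
# Minimal sketch: two-tailed F-test of H0: sigma1^2 / sigma2^2 = 1.
from scipy import stats

def f_test(s1_sq, n1, s2_sq, n2, alpha=0.05):
    F = s1_sq / s2_sq                          # statistic under H0
    df1, df2 = n1 - 1, n2 - 1
    lo = stats.f.ppf(alpha / 2, df1, df2)      # reject if F < lo ...
    hi = stats.f.ppf(1 - alpha / 2, df1, df2)  # ... or F > hi
    # Two-tailed p-value: double the smaller tail area.
    p = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))
    return F, (lo, hi), p
```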

  43. Example 13.6… IDENTIFY • In Example 13.1, we looked at the variances of the samples of people who consumed high-fiber cereal and those who did not, and assumed they were not equal. We can use the ideas just developed to test if this is in fact the case. • We want to show: H1: σ₁²/σ₂² ≠ 1 • (the variances are not equal to each other) • Hence we have our null hypothesis: H0: σ₁²/σ₂² = 1

  44. Example 13.6… CALCULATE • Since our research hypothesis is: H1: σ₁²/σ₂² ≠ 1 • We are doing a two-tailed test, and our rejection region is: F < .58 or F > 1.61

  45. Example 13.6… CALCULATE • Our test statistic is: F = s₁²/s₂² = .3845 • Since .3845 falls below the lower critical value of .58, there is sufficient evidence to reject the null hypothesis in favor of the alternative; that is, there is a difference in the variance between the two populations.

  46. Example 13.6… INTERPRET • We may need to work with the Excel output before drawing conclusions… • Our research hypothesis H1: σ₁²/σ₂² ≠ 1 requires two-tail testing, but Excel only gives us values for one-tail testing… • If we double the one-tail p-value Excel gives us, we have the p-value of the test we’re conducting (i.e. 2 × 0.0004 = 0.0008). Refer to the text and CD Appendices for more detail.

  47. Example 13.6… CALCULATE • If we wanted to determine the 95% confidence interval estimate of the ratio of the two population variances in Example 13.1, we would proceed as follows… • The confidence interval estimator for σ₁²/σ₂² is: • LCL = (s₁²/s₂²) (1 / F_{α/2,ν₁,ν₂}) and UCL = (s₁²/s₂²) F_{α/2,ν₂,ν₁}
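A short sketch of this estimator; note the swapped degrees of freedom in the F value used for the upper limit, exactly as in the formula above (the function name is my own):

```python
# Minimal sketch: confidence interval for sigma1^2 / sigma2^2.
from scipy import stats

def var_ratio_ci(s1_sq, n1, s2_sq, n2, conf=0.95):
    alpha = 1 - conf
    ratio = s1_sq / s2_sq
    lcl = ratio / stats.f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)
    ucl = ratio * stats.f.ppf(1 - alpha / 2, n2 - 1, n1 - 1)  # swapped df
    return lcl, ucl
```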

  48. Example 13.6… CALCULATE • The 95% confidence interval estimate of the ratio of the two population variances in Example 13.1 is: LCL = .2388 and UCL = .6614 • That is, we estimate that σ₁²/σ₂² lies between .2388 and .6614. • Note that one (1.00) is not within this interval, consistent with rejecting H0…

  49. Identifying Factors • Factors that identify the F-test and estimator of σ₁²/σ₂²: interval data and a comparison of the variability of two populations.

  50. Difference Between Two Population Proportions • We will now look at procedures for drawing inferences about the difference between populations whose data are nominal (i.e. categorical). • As mentioned previously, with nominal data we calculate proportions of occurrences of each type of outcome. Thus, the parameter to be tested and estimated in this section is the difference between two population proportions: p₁ − p₂.
